
Security.txt - janvdberg
https://securitytxt.org/
======
tptacek
I don't think this is fully baked.

Most importantly: there are no formal definitions for what these disclosure
terms mean, the distinctions between them are not unimportant, and the most
important distinctions have nothing at all to do with "disclosure".

To me, as a working professional in this field, "disclosure: full" informs me
that if I report a vulnerability in your service, you'll publish some sort of
public advisory, or agree to me doing so, on my own timeline, regardless of
when you're fully patched and regardless of whether you've confirmed or
triaged the bug.

Is that _exactly_ your understanding of "disclosure: full"? If not, there's
your first minor problem.

But there are more important problems. Actually, I don't need any permission
whatsoever to publish a vulnerability I find in your site, _nor is there any
real norm in the infosec community that I shouldn't_ ("responsible
disclosure" is a meme propagated by vendors who want to control independent
researchers). You can state your _preference_ that I coordinate with you
("coordinated disclosure" is the researcher-friendly way to write "responsible
disclosure"), but you can't dictate that to me --- and the suggestion that you
could is presumptuous and rude.

But that's still not the big problem. The BIG problem is that _disclosure
policy has nothing to do with whether you can test_.

There is a widespread belief on the US Internet that light web application
testing of SaaS products is allowed. It is very much not. Without explicit
permission to test a website I put on the Internet, your doing so violates
(and probably _should_ violate!) CFAA, and in some states also state statutes.
That goes not just for disruptive testing, like seeing if you can break a SQL
query, but also for stuff that most testers believe is harmless, like light
XSS testing. I can't tell you how many times an XSS test vector "broke" a site
for a client, for instance when it got cached as a "recent search" and
displayed, in a DOM-breaking way, to every user on the home page for the site.
Bring down a site like that without permission, and even if it says
"disclosure: full", you might be civilly liable for the downtime.

I think the "disclosure" field here is thus pretty ill-advised. It advertises
a policy that isn't super useful for researchers to know, and doesn't define
the most important policy, which is "am I allowed to test this site and what
are the rules of engagement for doing so".

What does that leave us with? A PGP key and the name of your security email
address. The PGP key is useful, but the contact address should just always be
"security@" anyways.

There's really no need for a standard format for this stuff. Just create a
"security.html" or whatever that reminds people to send mail to security@,
publishes your PGP (note to Symantec: _public key_) key, and gives people
permission to test the site.
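A plain page like that needs very little. A minimal sketch of what such a file might contain (every address, URL, and the testing grant below is illustrative, not taken from any real policy):

```text
Contact: security@example.com
PGP key: https://example.com/security-pgp.asc
Testing: Non-disruptive testing of this site is permitted. Please avoid
         actions that degrade service for other users, and report any
         findings to the address above.
```

The last stanza is the part that actually matters here: an explicit grant of permission to test, with rules of engagement.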

 _Update_

One of the authors DM'd me to say they're removing "Disclosure" from the next
version of the draft, which is I think the right move.

~~~
developer2
Regarding your analysis as to the actual usefulness of this spec - or lack
thereof - I agree. There's not much substance or detail; the result seems like
the bullet points of an initial PowerPoint presentation for such a concept,
rather than the final output of a panel attempting to create a proposal meant
to be seriously considered and adopted.

As for the rest of your comment... are you really a "working professional in
this field"? By that I'm asking whether you are self-employed and playing fast
and loose according to your own rules, or if you instead have an established
career within the industry for which your professional peers stand beside your
methods? I have to believe that honest professionals with a shred of
reputability in this field would not be advocating that _playing nice_ , so to
speak, is some altruistic gift on the part of the researcher. "Security
Researchers", quoted to loosely include "rogue" grey and black hats who think
they have free rein to hack in any way they see fit, have gone to jail for
what you appear to be claiming is a risk-free "right". It's not black and
white, and the courts seem to favour whichever side suits their fancy in each
individual case.

It really bothers me to see anyone suggest that any public port on a machine
is 100% free rein to abuse. If public-facing access is all it takes to be
fair game, then I must be morally and legally in the right to pick the door
locks of private citizens and businesses, or peel off the face of any ATM in
an effort to "gauge its security". The loophole excuse is that the Computer
Fraud and Abuse Act, in theory, only covers computers owned by the government
and financial institutions. When all moral obligations are ignored, and
skirting around lacklustre laws is the only defence, the intentions of some
"researchers" quickly become questionable.

Your stance on the subject is probably fairly common in terms of what people
_want_ to be labelled as fair game, while in the real world researchers have
to dial back a bit and _play nice_ in order to maintain an ounce of respect in
the field.

~~~
lucideer
> are you really a "working professional in this field"?

> I'm asking [...] if you [...] have an established career within the industry
> for which your professional peers stand beside your methods?

I'm not sure if you're already aware of tptacek's reputation in the field and
you're hinting at something different, or if you're asking these questions in
a more direct sense. If the latter, I'd recommend checking out his profile and
quickly Googling.

> in the real world researchers have to dial back a bit and play nice in order
> to maintain an ounce of respect in the field.

Despite what I mentioned above, it may be that this is actually true; that
Thomas' reputation gives him a certain level of immunity whereas most "normal"
researchers would have to stick to a stricter level of etiquette.

All said though, I think SolarNet may also be correct that you seem to have
misinterpreted at least some of Thomas' post.

~~~
darksim905
>tptacek's reputation

Could you explain? I never really understood him as an individual & he's
been... well, harsh at times to me & others.

------
dsacco
Good luck getting this adopted. A couple of months ago I was trying to
responsibly disclose the complete exposure of every customer's name, email
address, phone number and the last four digits of their credit card to a
public QSR company that allows online orders. It was straightforward enough
that I found it passively while trying to log in.

It took over a week of me searching the website for a security page, trying to
contact support on Twitter, emailing security addresses that turned out to be
undeliverable, looking through my network for a contact, etc. Someone with a
better network than me finally put me in touch with the right person in the
security org, and the first response I received was, essentially, "Are you
trying to sell us something? This seems manipulative. Why didn't you email our
[undeliverable] address?"

As far as I'm aware, the vulnerability still hasn't been resolved. Here are
the problems I see:

1. I don't know if this is a good implementation of this idea in the first
place - how does this handle liability? I agree with what tptacek mentioned in
this thread already: this seems underspecified, and companies looking to adopt
this will want to have specific assurances with regards to liability and
what's allowed. What exactly does "Disclosure Type: Full" mean?

2. If a company is not already in the trendy tech group that likes to host
security pages and use bug bounties, this is probably not even going to be on
their radar. What is different about this that will appeal to them? How do you
get a large, faceless organization which regards its security organization as
more of a risk/compliance/continuity division than a technical software
security division to adopt this standard?

To be clear, as someone who has gone through this song and dance several
times, I absolutely would like to see improvements. But I don't know that this
is the best way to do it. A more realistic standard might be a central tracker
that maintains a list of key security contacts and their email addresses at
various organizations. There are already lists like this floating around on
GitHub, but they tend to be extremely out of date. Other than that it would be
helpful to try and standardize a /security.html page with details about who
and where to contact (instead of, say, a page that assumes every customer
looking for a security contact mistakenly believes their accounts have been
"hacked").

~~~
freedomben
This is utterly maddening. I had a similar experience, except that at first
they didn't believe me regarding the vulnerability. I basically said, "cool,
I'm glad it's not a problem, so when I release the deets and write up a blog
post you have nothing to worry about ;-)"

After they finally acknowledged the problem they started treating me like I
was a criminal who attacked them. I strictly follow responsible disclosure,
but it's crap like that that makes me want to reconsider sometimes.

~~~
theyregreat
Yup. They often claim you’re “malicious” to cover their incompetent butts.
This is why reporting should be:

- end-to-end encrypted with a brand new, limited-time GPG key

- use a disposable email service

- make up an alias

- send from public WiFi at a coffeeshop some distance away that doesn’t have
corporate CCTV

- don’t bother with Tor or a VPN, because it advertises “suspicious behavior”
across network hops that may be monitored/logged

If you volunteer information about your identity, it can be easily misused to
attack you in a myriad of legal, professional, social media and other dirty-
tricks ways.

~~~
slipstream-
IMO, the problem with disclosure is one of the biggest problems with infosec
right now...

------
mjw1007
Should surely be .well-known/security.txt, or if not explain why not.

rfc5785 "Defining Well-Known Uniform Resource Identifiers (URIs)"

[edit: I see the FAQ says that, while the linked Internet Draft says it goes
at top-level like robots.txt]

~~~
skybrian
A bit off-topic, but I don't understand the popularity of hidden directories
for what should be human-visible files. Why be intentionally obscure rather
than discoverable?

~~~
dasil003
To avoid namespace collisions. It's not discoverable in any case because no
one has a directory listing as their homepage.

------
jacquesm
[https://securitytxt.org/.well-known/security.txt](https://securitytxt.org/.well-known/security.txt)

Gives a 404

but

[https://securitytxt.org/security.txt](https://securitytxt.org/security.txt)

Has the file... maybe they should either follow their own advice _or_ update
the text of the advisory?

Like this it appears a bit inconsistent.

~~~
TheSwordsman
Per the Draft RFC, `security.txt` is meant to be akin to `robots.txt`, in that
it lives at the site root.

So yeah, it's a bit inconsistent.

~~~
jacquesm
That is going to cause some problems with a very large percentage of
webservers being configured to _never_ allow the fetching of any dotfiles to
avoid accidentally exposing stuff that wasn't meant to be in the web tree in
the first place.

~~~
raimue
/.well-known/ is defined in RFC5785 and paths under this prefix can be
registered with IETF. For example, /.well-known/acme-challenge/ is used by
ACME, the domain verification protocol used by Let's Encrypt.

[https://www.ietf.org/rfc/rfc5785.txt](https://www.ietf.org/rfc/rfc5785.txt)

It should not be too hard to make an exception for this path in your web
server configuration. Also note that URLs do not have to match the local
filesystem, so you could as well use an alias to a different local path.
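For example, if a blanket dotfile block is in place, a single exact-match location can carve out the exception. A hedged sketch for nginx (paths are illustrative):

```nginx
# Deny dotfiles in general...
location ~ /\. {
    deny all;
}

# ...but carve out the registered well-known path. An exact-match
# ("location =") block takes precedence over the regex block above.
location = /.well-known/security.txt {
    alias /var/www/meta/security.txt;  # file can live outside the web tree
    default_type text/plain;
}
```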

------
manigandham
This is completely useless. Any company that cares about security enough to do
this will already have security@, support@, abuse@ emails setup. Those _are_
the well known contact endpoints. Bug bounty platforms also have contact info
and messaging built-in.

~~~
2bitencryption
what RFC describes that standard?

~~~
btown
Well, common sense for one, but also
[https://www.ietf.org/rfc/rfc2142.txt](https://www.ietf.org/rfc/rfc2142.txt)
section 4!

------
ikeboy
If you have a disclosure policy, chances are googling "company name +
disclosure" will bring it up. I don't really see the benefit of standardising
this: it's not going to help any company that previously had no disclosure
policy decide to implement one, and "discoverability of existing disclosure
policies" doesn't really seem like a big issue.

~~~
thephyber
The more you use a Google feature as a crutch, the less the web is semantic.

Also, Google has increasingly made it difficult to automate searches, so the
effort required to notify multiple websites increases quickly.

~~~
r3bl
> Also, Google has increasingly made it difficult to automate searches

That's an understatement. Doing something as simple as

    inurl:"humans.txt"

...and consuming the first three pages showed me a CAPTCHA. This was done via
a browser, manually. Not all links were clicked.

~~~
mad182
Yep, I run across this often. Kind of funny that using a bit more advanced
functionality instantly makes you suspicious.

~~~
clarry
A few years ago, my gmail account was suspended for suspicious activity after
I sent two messages to verify that my own email server is working following
some updates & configuration changes.

Since then, it's been hard to justify using gmail for anything serious.

------
rando444
A section for acknowledgements?

Also the Contributions AKA "these random accounts from twitter may have
participated to the project to some degree, but we don't want to disclose
their names or contributions" section of the website made me smile.

~~~
woodrowbarlow
according to the IETF draft, the acknowledgements directive is meant to point
to a page thanking people who have discovered and reported vulnerabilities in
the past.

doesn't seem very useful to me. since the document supports comments, just put
that link in a comment.

------
hitekker
This entire page reads as a stream of consciousness.

This one line is particularly egregious:

"Security.txt defines a standard to help organizations define the process for
security researchers to securely disclose security vulnerabilities."

> Defines a X to help Y define a Z for Q to W.

> Securely close security vulnerabilities.

Ick.

------
IncRnd
This site is horribly complex for creating a plain-text file with _four
fields_. Perhaps the complexity comes from having _10 contributors_.

_Not everything needs to be a large project._ A template right on the webpage
would have the same value. That raises the point that a blog post about the
RFC would have greater utility.

~~~
hw
There are people out there who spend the time and effort to create a good
looking site, even for something as simple as generating a plain text file
with 4 fields. Your comment would have greater utility if it was actually
about the subject of the site itself, and not about how it was designed.

~~~
IncRnd
That's a fair point.

In practice, I don't see a need for this .txt file to be present. There are
other .txt files, such as humans.txt, that haven't seen widespread adoption
because they generate little value.

While there are many issues inherent in reporting vulnerabilities, this .txt
file will not solve them. Not only is the file's existence error-prone from a
maintenance perspective, but my experience has shown me that such a file is
not needed.

Due to these factors, I believe that the existence of such a file is
unwarranted complexity without a corresponding level of benefit.

------
fbnlsr
I think humans.txt ([http://humanstxt.org/](http://humanstxt.org/)) fits better,
and already exists.

~~~
irrational
I've never heard of this. I wonder how many websites implement this. I tried a
bunch of sites, but the only one I could find that had a humans.txt file was
google:

Google is built by a large team of engineers, designers, researchers, robots,
and others in many different sites across the globe. It is updated
continuously, and built with more tools and technologies than we can shake a
stick at. If you'd like to help us out, see google.com/careers.

~~~
JosephRedfern
I do! [https://redfern.me/humans.txt](https://redfern.me/humans.txt)

It should really be human.txt...

~~~
Deimorz
A redirect to LinkedIn definitely isn't a proper implementation.

~~~
JosephRedfern
I agree. It was meant to be a bit... cheeky. Honest question though, is your
issue with the fact that LinkedIn isn't plain-text, or the nature of the
information that LinkedIn provides?

~~~
jwfxpr
Neither. If any of us curl/wget/whatever that file, we gain no usable, plain
text, human information. We then have to, manually, notice the redirect, take
a manual action to follow up, and then those two further objections would
apply. Why implement a text file spec that doesn't serve a text file? Even if
all your text file contained was "Please see
[http://linkedin.com/example](http://linkedin.com/example)" it would be a
correct implementation.

------
libeclipse
Nice idea, but a little inconsistent. Even their own site doesn't comply with
their own RFC.

~~~
jopsen
Yeah, the RFC seems very vague too.

I could see some value in meta-data for automated reporting of security
issues.

But the spec would have to be less informal.

------
askvictor
So when a site is hacked, the hackers replace the email and public key in this
file with their own (or one that goes to /dev/null), and the site's owners are
never informed when someone else notices the breach and tries to use this file
as intended.

There should either be a well-known convention (like security@, as others
have mentioned) or an external public registry of this sort of thing.

~~~
solatic
Because it's a standardized text file in a standard location, it's quite easy
to add a black box monitor of the file and alert if the values change. If
anything, mucking around with the file is the last thing hackers want to do,
because it would tip off the operators that there had been a breach, that the
operators otherwise would not have noticed.
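Such a monitor is a few lines in practice. A sketch in Python (the URL and the alerting action are left as stubs, and the known-good hash would be recorded out-of-band when the file is first published):

```python
import hashlib
import urllib.request

def security_txt_changed(content: bytes, expected_sha256: str) -> bool:
    """True if the served file no longer matches the known-good hash."""
    return hashlib.sha256(content).hexdigest() != expected_sha256

def fetch(url: str) -> bytes:
    # e.g. fetch("https://example.com/.well-known/security.txt"),
    # ideally run from a host *outside* your own infrastructure
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

# Record the hash once, when the file is known to be good:
KNOWN_GOOD = hashlib.sha256(b"Contact: security@example.com\n").hexdigest()

# Later, from a cron job: page or email someone when this is True.
tampered = security_txt_changed(b"Contact: attacker@evil.example\n", KNOWN_GOOD)
```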

~~~
askvictor
Good points, though those should be spelt out in the original document/site.

------
hannob
Whether or not such a file makes sense is another question, but I find the
mentality of the "Disclosure" field quite puzzling. It is not up to an
affected party to decide what disclosure terms apply. After responsibly
disclosing an issue to a site a security bug finder should disclose it to the
public, whether the affected site likes that or not.

------
lifeisstillgood
I seem to remember suggesting jobs.txt as a listing for a company’s open
positions. Someone even made a site promoting the idea.

What I mean to say is that the concept of a domain is much deeper than html,
and we use it so shallowly. The web is distributed and simple if we want it
back.

~~~
Top19
Can you expand more on this? What do you mean by “much deeper than html”?

~~~
libeclipse
He means if you have a domain you can pretty much serve whatever you want off
of it.

~~~
lifeisstillgood
Maybe html was wrong phrase. I mean the distributed nature as typified by
everyone having their own domain - with attached storage. This was the default
back in the dial up days (side thought is that Apple could bring that back -
iCloud is nice as storage and all but how much will it take to go from iCloud
to paulsdomain.com/photojournal - that’s almost a Facebook killer and at some
point is much much easier to distribute)

------
Aaron1011
I can't seem to find a definition of "Partial disclosure" on either the
website or the IETF RFC draft.

~~~
gregmac
Yeah, I was looking for the same. All the draft [1] says is:

    2.5.  Disclosure:

    Specify your disclosure policy.  This directive MUST be a disclosure
    type.  The "Full" value stands for full disclosure, "Partial" for
    partial disclosure and "None" means you do not want to disclose
    reports after the issue has been resolved.  The presence of a
    disclosure field is NOT permission to disclose vulnerabilities and
    explicit permission MUST be saught where possible.

In contrast, the actual generator tool on the website uses a URL
([https://example.com/disclosure.html](https://example.com/disclosure.html))
as a placeholder, which doesn't comply with this section.

[1] [https://tools.ietf.org/html/draft-foudil-securitytxt-00#section-2.5](https://tools.ietf.org/html/draft-foudil-securitytxt-00#section-2.5)

~~~
tedunangst
Explicit permission must be saught (sic)? How does that work?

~~~
tptacek
It doesn't even make sense if you assume the terms are defined, because
disclosure obligations are bilateral. If I'm reporting a bug to your site
because I've found a new ImageMagick vulnerability, it is more likely that I
the reporter want an embargo from _you the site operator_ than the other way
around.

------
jph
If you're considering this, please consider responsible disclosure steps. Here
are ones I use with consulting clients.

[https://github.com/joelparkerhenderson/responsible_disclosur...](https://github.com/joelparkerhenderson/responsible_disclosure)

~~~
tptacek
Please consider replacing the term "responsible disclosure" with "coordinated
disclosure", and revise your steps accordingly. Non-coordinated disclosure
isn't necessarily "irresponsible", and the suggestion that it is is frowned
upon among serious testers, which are presumably the ones you want to attract
with disclosure policy.

And it's worth remembering that any kind of disclosure "policy" is a _request
for a favor_ from the researcher, so it's good to word things accordingly. You
wouldn't generally ask for a concession (like honoring an embargo on publicly
reporting a finding you took time to generate) right after also "asking" the
reporter to report "in good faith".

~~~
jph
Done. Thank you for the advice, and all your security work.

[https://github.com/joelparkerhenderson/coordinated_disclosur...](https://github.com/joelparkerhenderson/coordinated_disclosure)

If you have anything more you want in it, please let me know.

~~~
jpswade
Isn't responsible disclosure the aim here? I don't think substituting
"coordinated" for "responsible" is a sensible strategy for your project.

The coordination is inherent in the fact that they are honouring your
disclosure policy. The word "coordinated" is redundant.

Objectively, it should be called a "security disclosure" policy.

~~~
jph
You make good points.

How would you improve it to clarify the aim is to be generally useful for both
sides?

For example, when I personally discover a security issue, I want to be able to
report it to a company, and also include a link to this doc, and ask "Here's
how I suggest we interact and why; what do you think?".

------
disconnected
This is just "feel-good" bullshit that doesn't actually solve any real
problem.

In my life, I have only reported two different vulnerabilities to two
different vendors.

One of them didn't care at all. They were transmitting usernames and passwords
in plaintext and I actually showed one of their engineers this, live, in
person, and he just shrugged.

The other one told me something to the effect of "yes, this is very serious.
We'll look into it right away" and actually fixed it... 3 years later.

Your biggest hurdle isn't reporting vulnerabilities. That's the easy part.

The biggest hurdle is getting someone to care.

~~~
subcosmos
>In my life, I have only reported two different vulnerabilities to two
different vendors.

Interesting. I don't think this is for you.

Ever have the task of needing to report a security issue to 10k sites and wish
you could have any hope of automating it?

I certainly have. More like 100k!

Don't be so negative when people try to do good.

~~~
manigandham
You needed to contact 100k sites about security? Do you have more details on
this?

The example security.txt for this site currently has a twitter handle as the
contact. How are you going to automate that?

~~~
mzzter
One may want to notify many website owners at once that it’s a good time to
apply a patch in response to a security vulnerability affecting a technology
they are using. Ex: WordPress, node.js, MongoDB, etc.
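If the file were widely deployed, the notification side would be mechanical: fetch /.well-known/security.txt from each affected host and pull out the contact endpoints. A sketch of the parsing half (field names follow the draft; real-world files will vary, and contacts like Twitter handles would still need manual handling):

```python
def parse_contacts(body: str) -> list[str]:
    """Extract Contact: values from a security.txt body."""
    contacts = []
    for line in body.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments
        if line.lower().startswith("contact:"):
            contacts.append(line.split(":", 1)[1].strip())
    return contacts

example = """\
# ACME Corp security contacts
Contact: security@example.com
Contact: https://example.com/security-report
Encryption: https://example.com/pgp-key.asc
"""
print(parse_contacts(example))
# ['security@example.com', 'https://example.com/security-report']
```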

~~~
manigandham
There are 100s of millions of sites, who’s going to crawl all of them and send
out alerts? How would you know what backend tech is being used?

Major risks and CVEs are already published in the appropriate news channels,
which is far more efficient and effective.

~~~
subcosmos
It should be, but it isn't, and more importantly, there's lots of critical
infrastructure out there that is highly vulnerable.

~~~
manigandham
Who is running critical infrastructure that you can't easily reach right now?
Are you saying you know of a major CVE that has no coverage?

------
kseifried
Suggested for policy inclusion for CNA (CVE Numbering Authority) guidelines.
And yes, I would suggest they remove the disclosure policy, it's not really
all that germane for the vendor, from the security researchers point of view.

Edit: forgot link:
[https://github.com/CVEProject/docs/issues/53](https://github.com/CVEProject/docs/issues/53)

------
campuscodi
Here's my interview with Ed on the IETF proposal:
[https://www.bleepingcomputer.com/news/security/security-txt-standard-proposed-similar-to-robots-txt/](https://www.bleepingcomputer.com/news/security/security-txt-standard-proposed-similar-to-robots-txt/)

------
ejcx
I've only done security in SaaS in my career. This text file is pretty
useless. Disclosure is very out of place, and indicates to me that it's pretty
much just for companies with a bug bounty.

I'm a bit negative, but I don't want more useless emails from security
researchers. Companies that do security well will adopt this, and are already
easy to get ahold of...

------
vectorEQ
how about just putting the right address in the whois information? >.> ... if
you're gonna put it on the public internet anyhow, might as well use the
age-old mechanism already in place to do this...

~~~
tmdvs
I don't understand why businesses use private WHOIS services. Surely if
you're a business you'll have public contact information? I was once tasked
with trying to get in touch with a company whose domain name had expired, but
with no website (read: contact page) or real email address in the WHOIS I had
to give up.

------
unixhero
Should have a section for which software license the website code is under.

Also for any runtime JavaScript.

And it should also specify how private data is stored.

And how private data is collected, processed, and for how long it is retained.

~~~
cyphar
> Should have a section for which software license the website code is under

LibreJS already has standardised ways for websites to report this, which are
far more flexible and useful.

------
accurrent
I smell an opportunity for spam bots.

------
snakeanus
A bit sad that it does not allow for the file to be signed with OpenPGP.

~~~
jlgaddis
What precludes you from including a comment with a link to the signed version
of the file with ".asc" appended to the filename?

~~~
cyphar
Nothing, but then it's an extension to a spec for a five-line plaintext file.
Seems quite wasteful.

------
stefanwlb
Heh, I thought the domain ending was "txt" and was trying to register a domain
with ".txt" before I realised that it doesn't exist and that it's
securitytxt.org and not security.txt! Silly me.

------
dreamdu5t
As an operator of many websites... I don't want more emails from "Security
Researchers" who want $100 to tell me the obvious such as, "Login form should
be rate-limited" or "emails can be enumerated using password reset form" and
the like...

Instead, we should have a _central_ vulnerability repository that is open-
source, standardized, public, and transparent. This repository should vet
vulnerability reports _before_ contacting the software owner prior to
disclosure.

~~~
bzbarsky
On the flip side, such a vulnerability repository may not have incentives
aligned with either the bug reporter or the software owner. See
[https://palant.de/2017/10/04/observations-on-managed-bug-bounty-programs](https://palant.de/2017/10/04/observations-on-managed-bug-bounty-programs)
for a recent post about that sort of problem.

------
bkeroack
The first thing every attacker will do: replace security.txt

~~~
dmix
Attackers don't do things that overtly "change" things... especially something
that is visually apparent on public facing websites...

Changing things is how you get detected.

Much like that recent thread where John Kelly brought his phone in to IT
because it wouldn't update and they found malware. That's an obvious failure
in any "APT" hackers guide.

This isn't like modifying .bash_history to hide malicious behaviour (or worse
removing command auditing entirely when it's a potential company/personal
policy to activate it on all machines).

~~~
jlgaddis
> _Attackers don't do things that overtly "change" things... especially
> something that is visually apparent on public facing websites..._

Yet, for many, many years, that was probably the most common thing that
attackers (including a much younger, teenaged version of yours truly) did. At
that time, however, "defacing" web sites was done mostly for bragging rights
and without malicious intent. In many (most?) cases, the original index.html
page would be copied/backed up and the "new" web page (created haphazardly
using vi, perhaps) would replace or augment it.

For a long time, there were even mirrors [0] of these defaced web pages.
attrition.org ran one from 1995-2001 and, for much of that time, even ran a
mailing list where they would announce these "defacements" -- often,
immediately after they occurred (after being tipped off by the attackers). It
was pretty common to be able to view the "still defaced" site while it was
still live (before being taken down and/or restored). As a frame of reference,
attrition.org stopped mirroring these sites in May 2001 [1].

> _... it will no longer mirror the defacements because keeping up with the
> volume and rate of hacks is too much work ..._ ([1])

So, yeah, it happened _A LOT_. It is, of course, a much different world now
(although defacements still occur regularly).

[0]: [http://attrition.org/mirror/](http://attrition.org/mirror/)

[1]:
[https://www.computerworld.com/article/2582627/security0/attrition-org-stops-mirroring-web-site-defacements.html](https://www.computerworld.com/article/2582627/security0/attrition-org-stops-mirroring-web-site-defacements.html)

------
peterwwillis
....because security teams have great working relationships with all the
owners of all the content platforms in an org...

Just make an abuse@ or security@ catch-all for your mail server and point all
your domains' MX there. If you're worried about spam, build in a bounce-back
that asks for a signed email to make it past the bounce. Security researchers
can figure that out.

~~~
jsmthrowaway
> If you're worried about spam, build in a bounce-back

This is really bad advice. _Always_ disable spam filtering outright on abuse@
and postmaster@, _never_ filter them, _never_ bounce _anything_. Read every
one. I'm not saying hook those mailboxes up to automatically make a JIRA case
on each inbound -- there is screening to be done. However, the idea of abuse@
is that spam is often forwarded directly to it if it's originating from you;
if you are catching spam to abuse@ or postmaster@, well, do the math.

abuse@, postmaster@, security@ and friends are discussed in RFC 2142, with a
couple "must" directives that are important:

> However, if a given service is offerred [sp], then the associated mailbox
> name(es) _must_ be supported, _resulting in delivery_ to a recipient

Emphasis mine.

[https://www.ietf.org/rfc/rfc2142.txt](https://www.ietf.org/rfc/rfc2142.txt)

~~~
kuschku
So much this.

I recently ran a port scan of a subset of the internet on a single port (to
get an estimate of how many servers with a certain old protocol version were
still running, as I wanted to drop it from a client I was working on).

Beforehand, I messaged the contact email given by the AS of every affected
subnet.

Over 4 weeks, I got not a single complaint or request to be excluded.

Then I did the scan, and got tons of complaints – and once I pointed out the
email I sent them, most got very quiet, but one continued to complain and tell
me I should have tried harder to contact them.

~~~
jlgaddis
I work for an ISP and mails for abuse@ and friends end up in my mailbox.
Occasionally, there's a "legit" one that I need to look into but the majority
of them are automatically-generated reports that some IP address in our
netblocks port scanned them or something stupid. The first one gets ignored,
the second one usually gets a response from me to the effect of "please stop
sending us these automated notifications", and the third one gets the sending
mail server's IP address added to my local blacklists.

My favorite ones to ignore are from a company who allegedly represents Viacom,
Warner Bros., TNT, etc., keeping me up to date on what our customers are
torrenting.

