
How not to sign a JSON object - LaSombra
https://latacora.micro.blog/2019/07/24/how-not-to.html
======
andrewstuart2
An easy enough mantra: sign bytes, not semantics. I've trudged through the
bowels of SAML long enough to know that canonicalization is thoroughly tricky
and annoying to deal with.

I don't care what you throw in the bytes; I can figure that out easily enough.
I want to know who generated those bytes and that they haven't changed.

For this reason also, I think "reach for symmetric first" is bad advice.
The primary benefit I can think of for HMAC is if you want to store client-
side state for more stateless browser-targeted services. It makes it easy to
throw an obscure cookie to your client and then validate it on the way back.
It seriously complicates things if you want to allow other people to validate
provenance of those bytes. With symmetric crypto, as soon as you can validate
you can also generate. That's not the system we want.
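To make that concrete, here's a minimal Python sketch (the key and messages are hypothetical). With an HMAC, verification works by recomputing the tag, so anyone who can verify can also generate:

```python
import hashlib, hmac

KEY = b"hypothetical-shared-secret"

def tag(message):
    # Anyone holding KEY can compute this...
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message, mac):
    # ...because verifying just recomputes the exact same tag.
    return hmac.compare_digest(tag(message), mac)

# So a "verifier" can mint a valid tag for a message the signer never sent:
forged = tag(b"message the signer never sent")
assert verify(b"message the signer never sent", forged)
```

With an asymmetric signature, the verification key cannot produce new signatures; with HMAC, it can.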

~~~
lvh
You say that's not the system you want but that's based on a pretty big "if":
you need "someone else" to validate something, and those people can't talk to
you directly and ask you if it's valid or not. That's not in practice how we
see this being used.

If you do need third parties to validate, a much simpler API is to just ask
the thing that holds the HMAC credential over TLS, and have it return
true/false. That makes it harder to demonstrate non-repudiation, but I'm not
convinced that's a property you generally care about. Even in OIDC (which
mandates a JWT, but for reasons that defy understanding doesn't mandate
cryptographic domain separation between (IdP, RP) pairs), leading OIDC
providers have long recommended you just talk to them over TLS to get
userinfo. (GSuite has recently muddled this in their docs, which irks me.)

(Disclaimer: I'm the author.)

~~~
DenisM
> holds the HMAC credential over TLS, and have it return true/false

And if you're connecting to the original party anyway, you might as well not
use HMAC or any crypto at all. When "signed" data needs to be sent to the
client, don't send it at all; instead, store the data locally under a GUID and
send that GUID to the client. When the client takes their GUID to another
party, that other party connects to the original party and retrieves perfectly
authentic data. Free bonus: faster client performance.

Best crypto is absence of crypto.
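A rough Python sketch of that indirection (the store and names are made up; a real deployment would use a database):

```python
import uuid

STORE = {}  # hypothetical server-side storage

def issue(data):
    # Hand the client an opaque handle instead of signed data.
    handle = str(uuid.uuid4())
    STORE[handle] = data
    return handle

def resolve(handle):
    # A third party asks the original server; the data is authentic by
    # construction, because it never left the server.
    return STORE.get(handle)
```

The trade-off, as the replies note, is that every lookup is now a round trip to the original party.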

~~~
gritzko
This multi step interaction has some spectacular failure scenarios.

~~~
DenisM
> spectacular failure scenarios

Such as? Any of them different than that of the suggestion I was replying to?

------
nemo1618
I recall this plaguing the Secure Scuttlebutt project -- they were signing
JSON objects that were encoded however Node saw fit to encode them, which
meant that in order to write an alternative implementation of the protocol,
you had to encode all the various objects _exactly_ how Node encodes them,
which is (unsurprisingly) non-trivial. I wonder if they addressed this in a
later version.

~~~
tracker1
If you're serializing the JSON, that is UTF-8. You're signing UTF-8 bytes...
don't mess with them, and the signature will be the same. Also, for JWT, the
UTF-8 is then converted to a base64 representation and tethered to the
signature. You're signing the UTF-8 bytes, not JSON. It doesn't matter how
it's serialized; if it isn't UTF-8, you're doing it wrong. The order of
properties doesn't matter; the signature is on the bytes.
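One caveat worth making concrete: the signature survives only as long as the exact bytes do. A small Python sketch (hypothetical key) showing how a parse-and-reserialize round trip breaks it:

```python
import hashlib, hmac, json

KEY = b"hypothetical-key"

payload = b'{"b": 2, "a": 1}'  # the exact UTF-8 bytes as originally sent
sig = hmac.new(KEY, payload, hashlib.sha256).hexdigest()

# Verifying against the untouched bytes works:
assert hmac.compare_digest(sig, hmac.new(KEY, payload, hashlib.sha256).hexdigest())

# But a parse-and-reserialize round trip can change key order and whitespace,
# so the same data no longer matches the signature:
reserialized = json.dumps(json.loads(payload), sort_keys=True).encode()
assert reserialized != payload
```

Which is exactly the Scuttlebutt problem upthread: the bytes only stay stable if nobody re-serializes.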

~~~
Vendan
SSB signs objects like so:

    
    
        {
          "previous": "%XphMUkWQtomKjXQvFGfsGYpt69sgEY7Y4Vou9cEuJho=.sha256",
          "author": "@FCX/tsDLpubCPKKfIrw4gc+SQkHcaD17s7GI6i/ziWY=.ed25519",
          "sequence": 2,
          "timestamp": 1514517078157,
          "hash": "sha256",
          "content": {
            "type": "post",
            "text": "Second post!"
          }
        }
    

gets signed like

    
    
        {
          "previous": "%XphMUkWQtomKjXQvFGfsGYpt69sgEY7Y4Vou9cEuJho=.sha256",
          "author": "@FCX/tsDLpubCPKKfIrw4gc+SQkHcaD17s7GI6i/ziWY=.ed25519",
          "sequence": 2,
          "timestamp": 1514517078157,
          "hash": "sha256",
          "content": {
            "type": "post",
            "text": "Second post!"
          },
          "signature": "z7W1ERg9UYZjNfE72ZwEuJF79khG+eOHWFp6iF+KLuSrw8Lqa6
                        IousK4cCn9T5qFa8E14GVek4cAMmMbjqDnAg==.sig.ed25519"
        }
    

which means that you have to be careful about how you remove the signature in
order to verify the original. The "main" Node implementation does this by
parsing the JSON, removing the field, and then re-serializing, forcing any
alternate implementation to exactly match the Node serialization in order to
be compatible.
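Roughly, that verification dance looks like this (a Python sketch, not the actual SSB code; the serializer settings are stand-ins):

```python
import json

def split_for_verify(message_json):
    # Sketch of the parse -> strip -> re-serialize dance (not the actual
    # SSB code). The serializer settings below are stand-ins: they must
    # match whatever the original writer used *exactly* (key order,
    # indentation, unicode escaping), or the rebuilt bytes differ and
    # signature verification fails.
    obj = json.loads(message_json)
    sig = obj.pop("signature")
    rebuilt = json.dumps(obj, indent=2, ensure_ascii=False).encode("utf-8")
    return rebuilt, sig
```

Any mismatch between the rebuilder and the original serializer breaks verification, which is why alternate implementations have to mimic Node byte-for-byte.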

~~~
ThrustVectoring
I'd be really curious as to why they chose not to go with something like

    
    
        {
          "data": {...},
          "signature": "z7W1ER..."
        }

~~~
Vendan
As far as I can tell, it's because they are hardcore JS devs, where JSON is
seen as almost a part of the language, and little to no effort was put into
things like future-proofing. A lot of the failings have been fixed as the
community has grown, but this one is core to how the whole thing works, so
changing it at this point is fairly non-trivial (it would basically involve
completely breaking any kind of backwards compat, and potentially even some
forwards compat).

------
notJim
I'm sort of skeptical of the idea that a bunch of people hand-rolling HMAC
stuff is going to result in fewer security issues in the long term than
widely-used, tested JWT libraries. Sure, if they do exactly what you say here,
and nothing else, that might work. But what happens when they realize they
need some minor tweak to it…

~~~
tracker1
Beyond that, hand-rolling JWT isn't so hard at all. When I read about some of
the exploits in certain libraries, I didn't have to worry about it, as I only
supported the known authority's public key in the system I was working on.

Of course, that makes the header effectively useless in practice.

~~~
tptacek
There are two obvious problems with this approach to engineering:

1\. You don't know what you don't know, and there is _a lot_ to know about
cryptography beyond the minimum needed to interoperate with other systems.

2\. If you're engineering seriously, you know people are going to inherit your
code and your design down the road, and if you're relying solely on a minimal
feature set without a coherent, informed design, those people are building on
sand.

Rolling your own JWT is a bad plan.

~~~
macspoofing
It's not like he is rolling his own crypto library. He's just rolling his own
token, encrypted/signed with industry-standard crypto.

It's not terrible to "roll your own JWT" if you don't actually care about
interoperability. And he's right, it does sidestep a lot of issues, because
JWT and the corresponding libraries are designed to handle far more use cases
than what he may need, and if you don't fully understand it all, you may ship
an insecure configuration.

------
timemachine
A fun read and primer on the subject. However, I would like to see more
emphasis on including a timestamp as part of the signed payload. The reason is
threefold: (a) Most importantly, it prevents 'replay attacks' [1]. Systems
that need to send the same command will do so with a unique timestamp and,
therefore, a unique signature. (b) The timestamp + signature is an idempotency
key. Your key cache's TTL is the same as the timeout for your timestamp
rejections. (c) To a lesser extent, it bolsters the system against a brute-
force length extension attack (see the Flickr API flaw in the reading) by
reducing the time the middle-man has to correctly calculate the glue bytes.

1:
[https://en.wikipedia.org/wiki/Replay_attack](https://en.wikipedia.org/wiki/Replay_attack)
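A minimal sketch of the timestamp-window idea (the key and window size are hypothetical):

```python
import hashlib, hmac, time

KEY = b"hypothetical-key"
MAX_AGE = 300  # seconds a signed request stays acceptable

def sign(command, now=None):
    # Prefix the command with a timestamp before signing.
    ts = str(int(now if now is not None else time.time())).encode()
    msg = ts + b"." + command
    return msg, hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def accept(msg, sig, now=None):
    if not hmac.compare_digest(sig, hmac.new(KEY, msg, hashlib.sha256).hexdigest()):
        return False
    ts = int(msg.split(b".", 1)[0])
    age = (now if now is not None else time.time()) - ts
    # The same command re-sent later gets a new timestamp and thus a new
    # signature; a replayed old one falls outside the window.
    return 0 <= age <= MAX_AGE
```

A production version would also cache seen (timestamp, signature) pairs for the duration of the window, per point (b).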

------
dev_dull
It's amazing to me that the whole article glosses over issues around key
storage for symmetric encryption, not to mention that you now have an O(n)
liability issue with all of your readers.

~~~
geofft
I think the scenario they're envisioning is one where the server generates an
object, and some time later, wants to make sure it's getting back an object
that the server has previously generated. HTTP cookies are a good example of
this. Anything where you'd reach for JWT is probably also this use case.

> _This post is mostly about authenticating consumers to an API. ... you’re
> trying to differentiate between a legitimate user and an attacker, usually
> by getting the legitimate user to prove that they know a credential that the
> attacker doesn’t._

The recipient of the API key doesn't need to verify their object. There's no
attack from being able to give someone a fake API key - any attacker in a
position to modify the API key in transit, which would just be a DoS, is also
in a position to drop the connection, which is also a DoS. Such an attacker is
probably also in a position to _steal_ the API key silently, which is a bigger
problem. (If a client is really curious whether they have a valid API key,
they can just make an API call with it and see if it works, they still don't
need to actually check the signature.)

~~~
closeparen
I don’t think that’s a fair assumption. Google IAP puts JWTs in headers to
attest the end user’s identity in a way you can’t just forge from inside the
firewall. Other use cases have you obtain a JWT from a dedicated auth service,
maybe even a 3rd party provider, so that other random services don’t have to
know about passwords and 2FA. In both cases it would be prohibitively
expensive to call back to the auth service to verify each token. The point of
using a standard like JWT is to make it intelligible in other codebases,
otherwise you could make your own format and serialization.

------
mixedCase
> Anyway, no, because you need to parse a header to read the JWT, so you
> inherit all of the problems that stem from that.

Can anyone enlighten me as to what point the author is trying to make? JWT is
pretty damn standard, so it's my go-to for signing objects.

~~~
lvh
We should write the "The JWT Problem" post one day, but this post isn't that,
so I really don't want to derail the discussion.

The short version is that there are flaws in the JWT specification that make
certain bugs likely. A classic example of the "you have to parse a header to
use the JWT" problem is the HS256 vs RS256 confusion bug, where your JWT
library would interpret an allegedly-HS256 (HMAC) JWT using RS256 (RSA) key
material. The JWT would get validated using the public key of the RSA pair,
interpreted as an HMAC key. But the public key is, you know, public! So the
impact of the bug is that everyone can forge JWTs. That is not a problem that
can happen in well-designed schemes.
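For the curious, the shape of that bug can be sketched without any real RSA code; the verifier below is deliberately vulnerable, and the key material is a stand-in:

```python
import base64, hashlib, hmac, json

# Stand-in for the server's RSA public key; the whole point is that this
# value is public, so the attacker has it too.
RSA_PUBLIC_PEM = b"-----BEGIN PUBLIC KEY-----\nMIIB...\n-----END PUBLIC KEY-----\n"

def b64(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64d(data):
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def vulnerable_verify(token):
    # The fatal move: letting the attacker-controlled header choose the
    # algorithm, then feeding RSA key material into HMAC.
    head_b64, body_b64, sig_b64 = token.split(b".")
    header = json.loads(b64d(head_b64))
    if header["alg"] == "HS256":
        expected = b64(hmac.new(RSA_PUBLIC_PEM, head_b64 + b"." + body_b64,
                                hashlib.sha256).digest())
        if hmac.compare_digest(expected, sig_b64):
            return json.loads(b64d(body_b64))
    return None  # (the real RS256 path is elided)

# The attacker needs only the *public* key to mint an accepted token:
head = b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
body = b64(json.dumps({"sub": "admin"}).encode())
sig = b64(hmac.new(RSA_PUBLIC_PEM, head + b"." + body, hashlib.sha256).digest())
forged = head + b"." + body + b"." + sig
assert vulnerable_verify(forged) == {"sub": "admin"}
```

The fix in real libraries is to decide the acceptable algorithm (and key) out of band, never from the token itself.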

We do have a blog post from last year that tells you what we think you should
do if you want to be safe and you know what kind of abstract thing (e.g.
signing, MACing, etc) you need:
[https://latacora.micro.blog/2018/04/03/cryptographic-
right-a...](https://latacora.micro.blog/2018/04/03/cryptographic-right-
answers.html)

(Disclaimer: I'm the author.)

~~~
random023987
Assuming a JWT implementation accepts only a fixed header (all header fields
must be present and match, no additional fields can be present), are there any
other issues with "just use jwt"?

~~~
floo
I do like using JWT. But its point is to offer flexibility. If you fix the
entire header, i.e. use a single signature method, you might just as well
concat that signature directly.

In other words if you stop utilizing JWT, you won’t have JWT specific
problems.

------
forgotmypw3
> Canonicalization is a quagnet, which is a term of art in vulnerability
> research meaning quagmire and vulnerability magnet. You can tell it’s bad
> just by how hard it is to type ‘canonicalization’.

The funniest line from this article and my word of the day.

~~~
lvh
That one is all 'tptacek.

------
beefhash
I'd go one step further: If you're in the situation that allows for HMAC over
asymmetric signatures, you might as well go all the way with authenticated
encryption instead of purely a MAC. Even if you have TLS, you still gain a
minor benefit of having black box tokens to everything outside the system, in
particular if you have to let untrusted or semi-trusted clients hold on to
signed data that they needn't know the contents of. Yes, security through
obscurity is not sufficient on its own, but it can help stall analysis and
exploitation efforts.

~~~
lvh
I don't think that addresses the problem set out in the post. Other, non-
cooperative third-party systems need to be able to parse the JSON blob as is.

------
dmitrygr
> Canonicalization is fiendishly difficult.

1000 times this! If you think you need canonicalization, always remember: "no
you do not!" It is not a hill you want to die on.

~~~
neilv
While integrating with a large company's SSO protocol (with a cleanroom
implementation), I found that the off-the-shelf "standard" XML DSIG
canonicalization code they were using (from a major vendor) actually was not
compliant with the W3C spec. That was unpleasant to have to discover and then
explain.

------
baybal2
More about the use case than how to do it: client-side certificate
authentication is not rocket science.

Unless you 100% need signing for a use case like a client-side ACL, there is
genuinely no need to overengineer your web app with your own authentication
scheme.

But if your scale already mandates doing operations without a server-side ACL
lookup for performance reasons, doing it over HTTP and the web stack might
already be too inefficient for the task.

~~~
geofft
Client certs are great if you control the client. If, say, Slack told me that
to use the Slack API I'd need to get a client cert, I would laugh.

One thing that comes to mind is that either you have to check revocation on
the server side, or you have to re-provision client certs frequently, and in
most client libraries that's actually difficult / annoying. If the server just
sends me an updated token, I can just put that in a local variable and call it
done.

There's a reason most APIs have moved to "you can just put the token in the
request payload". It's hard enough to ask people to set HTTP headers, asking
them to set client certs will be super complicated.

~~~
baybal2
> in most client libraries that's actually difficult / annoying.

You can add a client cert with just two clicks on Windows. In my experience,
that's way easier than most API authentication schemes.

~~~
geofft
One, on Mac and Linux it's more involved.

Two, I'm talking about client _libraries_, not web browsers. Every client
library (including those used by web browsers) is perfectly capable of passing
a query parameter. Most can pass cookies or custom headers. Not all of them
can pass client certs.

------
kerblang
> Maybe you don’t need request signing? A bearer token header is fine

If you're going to insist on symmetric key signatures, yeah. Otherwise a
symmetric signature would be the same as using symmetric encryption to store
user passwords, wouldn't it? You have to have the secret key to verify the
user's signature.

~~~
lvh
I don't understand your analogy. Can you elaborate? I don't see the parallel
between (a)symmetric signatures and encrypting passwords.

~~~
kerblang
The article appears to say that you must sign your requests with a symmetric
key. To verify the request, the server needs the same key, right? So the
server needs a copy of every single user's encryption key.

The server _could_ just store the keys plaintext in a database but I'm
assuming we can agree that's a horrible idea. The best it can do is encrypt
them symmetrically before storing them, using an extra-special secret key that
needs to be protected very carefully.

With username/password authentication, we don't store symmetrically encrypted
passwords, do we? We store a one-way hash of the passwords instead, because
symmetric is deemed not-good-enough.

~~~
lvh
The server is trying to validate that the thing it gets back is the thing it
previously sent out, so that's a different model. What concretely are you
suggesting we store as if it was a password? The serialized JSON object?

~~~
kerblang
I'm beginning to think you're trolling, but... I'll make one last attempt to
explain: How does the server verify the signature? Does it just wave a magic
wand? Doesn't it need the same key the user used to sign the request?

~~~
lvh
As I mentioned in that comment: the thing sending is also the thing receiving,
just at a later point. So "the same key the user used to sign the request"
doesn't seem to apply.

~~~
seandougall
It seems like you're describing a specific use case that is different from
what most readers think of when they see a post about cryptographic
signatures.

If the server generates the token, and all it's effectively doing is verifying
that the client has the correct token, then what is gained by making that
token be a signature of any sort?

~~~
geofft
You no longer have O(n) server-side storage proportional to your number of
clients, the way you would if you put each token in a database. You also no
longer need to worry about consistency + availability of that database if you
have multiple web servers. If one web server in your cluster generates a
signed token, and another one verifies it, they have the same key (they have
exactly one key, among all of them) and it works without any communication or
storage between the web servers.

~~~
seandougall
That's fair, although it doesn't allow tokens to be revoked or to expire. For
that you'd need to store at least some unique part of the payload you're
signing, and then you have O(n) storage requirements again, right?

~~~
kaoD
> or to expire

Add an expiry time to your signed payload. Tokens can have metadata.

> If the server generates the token, and all it's effectively doing is
> verifying that the client has the correct token

I think you might be understanding "token" as the API tokens that are just a
random bunch of bytes that are later matched to authenticate like they were a
password (let's call them "passtokens", I don't know if they have a name).

Tokens can be far broader than that. For example, JWTs contain arbitrary data.
Verifying that the token is valid is just verifying its signature. You
wouldn't check that the passtoken matches (in fact, you wouldn't have a
passtoken at all). The payload is where the sauce is at.

A payload can include anything, from a user id (which is similar to the
passtoken use case, authentication) to a list of grants (so you wouldn't even
have to hit the "users" table to check for permissions... or even have access
to it! As long as you can verify the signature.)

> to be revoked

That's true though. This is usually handled with short-lived tokens that must
be renewed periodically.

Alternatively you could have a Token Revocation List of some sort (which isn't
O(n) storage since 1. not all tokens will be revoked and 2. you can purge
expired tokens). But then you get the problem of synchronizing the TRL across
services (or centralize the token verification in a service which IMHO kinda
defeats the purpose).
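A sketch of an expiring token along those lines (the key, claim names, and TTL are made up; a real JWT library does more):

```python
import base64, hashlib, hmac, json, time

KEY = b"hypothetical-server-key"

def mint(claims, ttl=900, now=None):
    # Embed the expiry in the signed payload itself.
    exp = int(now if now is not None else time.time()) + ttl
    body = base64.urlsafe_b64encode(json.dumps({**claims, "exp": exp}).encode())
    sig = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return (body + b"." + sig.encode()).decode()

def check(token, now=None):
    body, sig = token.encode().rsplit(b".", 1)
    if not hmac.compare_digest(sig.decode(),
                               hmac.new(KEY, body, hashlib.sha256).hexdigest()):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < (now if now is not None else time.time()):
        return None  # expired: rejected with zero server-side state
    return claims
```

Expiry comes for free with the signed metadata; revocation before expiry still needs the short-lifetime or revocation-list machinery discussed above.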

~~~
geofft
The way I'd approach this is to treat "revocation" as meaning "after this
user's current tokens expire, they can't refresh them," and make the token
lifetime short enough that you're comfortable with it. Document that any API
call can also include a "new_token": {...} field, and the client should
update. On the server side, only try to refresh the token when it's close to
expiring, and don't fail the API call if you can't reach the revocation server
(just skip refreshing).

You still have a central server to track account status, but now it can be
more like "a text file with a list of usernames on a single box running
Apache, if it crashes we reboot it" and less like "a distributed, high
performance, highly-available in-memory K/V store that's in the critical path
for every request," which is going to make you a lot happier operationally.

Or you can push the list of usernames to revoke to each server, or something.

------
mikepurvis
It's not JSON, but I had a scenario like this where I wanted an in-band
checksum on an archive of files. In the end it was indeed a file with the
signature included in the archive itself, and the formula for computing it was
basically the shasum of the shasums of `file .` sorted alphabetically, with
the signature file itself excluded.

That worked out just fine, but I can see the argument that it's much harder to
get to a canonical JSON representation than it is to get to a canonical "tree
of files" representation. Indeed, it was easy enough in my case that the repo
contained a shell script one-liner that would compute it, and that was the
reference against which the "real" python implementation was validated.
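Something in the spirit of that formula can be sketched in a few lines of Python (the signature filename is hypothetical):

```python
import hashlib
from pathlib import Path

SIG_NAME = "SIGNATURE"  # hypothetical name for the in-band signature file

def tree_digest(root):
    # shasum-of-shasums: hash each file, sort by path, exclude the
    # signature file itself, then hash the combined listing.
    lines = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.name != SIG_NAME:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            lines.append("%s  %s\n" % (digest, path.relative_to(root)))
    return hashlib.sha256("".join(lines).encode()).hexdigest()
```

Because the signature file is excluded from its own input, adding or updating it never changes the digest it attests to.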

~~~
lvh
That sounds like you found an alternative format where finding something
canonical to sign is easy (files, the shasum CLI) and punted on canonicalizing
the in-band signature (which is good!).

------
lvh
A super-awesome related project I didn't get a chance to work into this post
(because it's entirely deserving of its own):
[https://github.com/benlaurie/objecthash](https://github.com/benlaurie/objecthash)

To be clear: I think that's a niche use case and while I think ObjectHash does
a great job of exploring it, I don't expect the median startup to need an
ObjectHash implementation.

(Disclaimer: I'm the author.)

~~~
tialaramex
Why is it _better_ to do tricky message processing before we achieve
confidence that the message is legitimate/genuine?

Or is the argument that even though this is worse it's so useful we might
really want to do it anyway?

~~~
lvh
Definitely authenticate first!

If there's anything I said that made you think otherwise, let me know: I would
like to amend that so no-one else thinks I could possibly mean that. The
initially recommended (unless you can do otherwise) approach in the blog post
is clearly "tag at the end" and every other approach also validates first. If
you're referring to ObjectHash: like I said, it's a very niche application, I
don't expect people to use it, and yeah, it enables new use cases.

(I expect you'd still really be authenticating the ObjectHash somehow -- e.g.
by sending it over TLS -- but that's out of scope for ObjectHash itself.)

~~~
tialaramex
Yes, I was referring to ObjectHash of course, since it's in the sub-thread
about ObjectHash.

"I don't expect people to use it" just seems like the sort of awful excuse
you'd usually be jumping on people for. It's like someone built a github
project with a bunch of crypto red flags to check whether their new "Search
github for projects with crypto red flags" idea works.

Don't get me wrong, it's clever, and I like clever. But I have learned in
cryptography to only accept clever when it is clearly in the service of a
specific pre-identified goal, and not just for its own sake. Isn't that
normally a philosophy you'd subscribe to? What's the _pre-identified goal_ for
this thing?

While we're here, another red flag. Mentioning Certificate Transparency as a
model for some other X Transparency. Certificate Transparency isn't a model
for anything. People have been saying to themselves almost from the dawn of CT
"Oooh, this is clever, I should do the same for X" and it's always a bad idea.
Someone might need a Merkle Tree. I'd argue they shouldn't use ObjectHash
anyway. But the chance they need all the other paraphernalia from CT?
Basically non-existent.

~~~
lvh
> "I don't expect people to use it" just seems like the sort of awful excuse
> you'd usually be jumping on people for.

I think it was pretty clear, by calling it a specific niche that ObjectHash
does a good job of exploring, that I am not making a recommendation.

------
eridius
The "regex bait and switch trick" doesn't work if any of the intermediate
processors might decode and re-encode the JSON.

~~~
lvh
That's an excellent point, you're of course right and the blog post could do a
better job explaining that. I'll amend.

------
EGreg
Seems that you just need to go ahead and build a canonical JSON encoder in JS.
It would be slower but hey, you need the same consistent algorithm for
signing. The encoder would be part of the spec.

~~~
tracker1
JSON is UTF-8; it's already canonical in that regard. If you don't f*ck with
the serialized bytes tethered to the signature, you'll be absolutely fine.

~~~
lvh
Not in the sense that the term of art "canonical", as in canonicalization, is
used. UTF-8 is not canonical: different byte sequences can map to identical
meanings, as covered in the post.
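Two quick demonstrations in Python of "different bytes, same meaning":

```python
import json, unicodedata

# Two different byte sequences, identical JSON meaning:
escaped = b'{"name": "caf\\u00e9"}'            # backslash-escaped form
literal = '{"name": "café"}'.encode("utf-8")   # raw UTF-8 form
assert json.loads(escaped) == json.loads(literal)
assert escaped != literal

# Even one "character" can be two byte sequences: precomposed é (NFC)
# versus e plus a combining accent (NFD) look identical to a human.
nfc = unicodedata.normalize("NFC", "café").encode("utf-8")
nfd = unicodedata.normalize("NFD", "café").encode("utf-8")
assert nfc != nfd
```

Either pair would produce different signatures over "the same" document, which is exactly the canonicalization problem.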

~~~
tastroder
I'm really confused by some of the assumptions in this whole thread, sorry. In
what scenario would a client touch the string it authenticates and parsed JSON
from before sending it back to the server later? This argument seems to assume
that I have to throw away the string I've parsed or somehow reconstruct the
same JSON and create the HMAC myself locally, which seems odd.

~~~
lvh
I think I may have answered your confusion in a different thread (I'm not sure
I parsed your comment correctly though), but: this is about a problem that
specifically occurs when you have JSON (or some other structured format) that
needs to have the signature in-band. You're right that it's way easier (the
first list of three bullet points, as you mentioned in the other comment
thread) if you can just shiv the tag on the outside.

Perhaps a more familiar case where this happens is SAML assertions with inline
signatures?

~~~
tastroder
Yeah, thanks, since you likely encountered these scenarios I guess we're just
looking at it from very different viewpoints and some of mine might be lost in
translation.

Luckily the number of times I've had to invent signing schemes or even
integrate SAML is limited. :)

------
Boulth
Looks good. I'm missing ACME request signing in comparison.

------
tracker1
I actually prefer JWT with an asymmetric key. Anyone can confirm the payload's
provenance. HTTPS takes care of encryption for the payload. Of course, with
JWT, only allow trusted keys or keys signed by a trusted CA.

There have been some poor implementations, but the method is pretty sound.

------
HONEST_ANNIE
How to prove you are a God:

Create a json file

    
    
        {"msg":"I am a God. My name is Bob.",
         "sha256":"78A873E..."}
    

where the hash is the checksum of the file including the checksum itself. It
would be equivalent to finding a fixed point of a cryptographic hash function
that happens to be the checksum of your message.

~~~
geofft
That's pretty easy to do with MD5 :)
[https://twitter.com/i/moments/838685002703466497](https://twitter.com/i/moments/838685002703466497)

------
jiveturkey
A fatally flawed post that dooms it to the category of SEO rather than what
should be useful information. I did especially like:

> Canonicalization is a quagnet, which is a term of art in vulnerability
> research meaning quagmire and vulnerability magnet. You can tell it’s bad
> just by how hard it is to type ‘canonicalization’.

The flaw?

There's nothing here not already well known, so this isn't an insightful piece
for those already well versed. Therefore, this is a piece written for those
that aren't so practiced, for the sake of discussion: junior dev or devops, or
non-security devs.

The content itself _is_ a good discussion, and digestible (lol).

But, as a piece that junior folks are expected to get a takeaway from, the
introduction is a disaster:

> This post is mostly about authenticating consumers to an API.

ie, not service-to-service auth.

> Unless you have a good reason why you need an (asymmetric) signature, you
> want a MAC.

A MAC/HMAC requires the signer and all verifiers to have the key. As stated
just prior, this is about "frontend" signing. A novice reader might not
realize they have to guard the key very well, and might even send it to the
client browser. "Unless you have a good reason" is not a sufficiently strong
warning for a post that is written as an instructable, more or less.

More architectural introduction (bonus with diagrams) is required. As is, this
post is a footgun.

~~~
tastroder
Not sure why you are downvoted; about 10% of the article addresses its
headline, and I frankly do not get the point either. The proposed solution is
in the first numbered list, and between that and the conclusion list is a
bunch of prose on non-JSON payloads.

The only slightly JSON-related content in there is a constructed scenario for
'in-band' signatures (the regex thing), which can just as well be achieved by
a bit of string processing. Any JSON object will start with {, end with }, and
have some more information in between. Replacing the initial curly brace with
'{"hmac": "foo",' gives you a valid JSON document. You can remove that easily
before parsing and place no restrictions on the object's keys. You can handle
edge cases like JSON literals or arrays by wrapping the whole thing in
{"hmac": "foo", "payload": yourstring} if you feel like it.
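That splice-and-strip trick, sketched in Python (hypothetical key; assumes the payload is a JSON object, per the wrapping caveat above):

```python
import hashlib, hmac, json

KEY = b"hypothetical-key"

def attach(payload):
    # Sign the original bytes, then splice the tag in right after the
    # opening brace. Assumes the payload starts with "{".
    tag = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return '{"hmac": "%s",' % tag + payload[1:]

def detach(signed):
    # Peel the known prefix back off before parsing; what remains is
    # byte-for-byte what was signed, so no canonicalization is needed.
    prefix_end = signed.index('",') + 2
    tag = signed[len('{"hmac": "'):prefix_end - 2]
    payload = "{" + signed[prefix_end:]
    ok = hmac.compare_digest(
        tag, hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest())
    return ok, json.loads(payload) if ok else None
```

As eridius notes elsewhere in the thread, this only holds as long as no intermediary re-encodes the JSON in transit.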

~~~
lvh
> Not sure why you are downvoted, about 10% of the article addresses its
> headline and I frankly do not get the point either. The proposed solution is
> in the first numbered list and between that and the conclusion list is a
> bunch of prose on non-JSON payload.

The proposed solution is not in the numbered list. The numbered list describes
how to sign a JSON blob from the outside. The rest of the document describes
what you do if that's not an option and you need to sign the blob in-line.
The very next paragraph after said numbered list describes how to do that.

(I'm the author.)

~~~
pvg
_(I 'm the author.)_

Whee, I can stop rage-typing 'how not to self-attribute your comment about a
thing you wrote' now!

~~~
lvh
Huh, sorry, I'm confused. What should I (not) be doing? I thought not
disclosing being the author of the linked thing was poor form.

~~~
pvg
Oh, it's nitpickier than that - you don't have to 'disclaim' being the author
because, you know, you're the author and can just say you are.

~~~
lvh
Ah! So you're saying I should say "Full disclosure" and not "Disclaimer"? (Or
nothing at all, as I do here, I suppose :-)) Yes, you're absolutely right and
I'll try to do better in the future :)

~~~
pvg
'Disclosure' is something journalists use to point out a potential external
conflict/relationship the reader might be unaware of. But you don't need it to
point out authorship either. You're really saying 'I want you to be aware I
wrote the thing we are discussing', not 'Please don't think I'm sockpuppeting
for myself'.

This is a bit like an even more persnickety version of 'nation state', in that
the trivial fix is just dropping the pointless ceremony.

