JSON-Based Universal Messaging Format (2017) (github.com)
60 points by cjus 39 days ago | 54 comments



> Note that there is a complex canonicalisation procedure for the JSON object, and that the sender must mutate the signed object;

This is a big no-no and an actual source of vulnerabilities. If you sign something, the signature goes around what you want to sign.[1] Doing "in-line" signatures is vastly more complex and error-prone. The easiest and most secure scheme is actually "sign a blob of bytes", i.e. signing a packed representation of a message. That way, you get zero ambiguity issues as far as signature-content interactions go [2], and you don't actually need a canonicalized message representation any more (which is not a common feature of serialization formats outside ASN.1 encodings).

There might be other reasons to not use UMF, but this one is completely sufficient to avoid it.

(Also calling HMAC tags "signatures" is confusing as heck and should be avoided.)

(Also the actual method of how the MAC is calculated is not specified; so clearly UMF is not a format, it is a meta-format.)

[1] Even JWT got that right.

[2] Context ambiguity AKA The Horton Principle remains, because that's not something a format solves.


Thanks for this. The need for canonical JSON is perhaps the best reason for dropping the signature field from future versions of the spec. However, because only the `to`, `from`, and `body` fields are required, there's no need to avoid the format - just don't use the signature field unless the body field contains a single field with serialized data. Certainly, that use would need to be clearly documented.

Again, your point is valid and will likely result in the deprecation of the signature field.

Thanks for taking the time to offer feedback.


Would you be able to elaborate on this?

Why is doing "in-line" signatures a worse design or a source of vulnerabilities? Are there any benefits for providing an in-line signature?

Any examples or additional information would be appreciated. Trying to better understand the issue at hand.


You have to completely parse the message to extract the signature and then re-serialize it before you're able to validate it. Consider a situation where you have a defect in your parser:

- in-line signature: you're applying your parser and serializer to the untrusted body of the message, and then verifying the signature. If this is a malicious payload, you've just run it through your parser and serializer.

- out-of-message signature: you have the full signature and can verify the message without running a potentially-malicious message through anything other than your signature-verification code.


Option1: json.inlineSig: '{ "a": 1, "b": 2, "signature": "ff1341234..." }'

Option2: json.outOfBandSig: '?????'

Option2: json.signature: 'File=json.outOfBandSig; Signature=ff12341234...'

Basically, if you try to do option #1 you actually need to parse the content, and THEN find out it's untrusted (which means you need to _execute_ your parser on the potentially unknown / hostile bytes), and then pretend you never processed them in the first place (discarding the unknown / hostile bytes).

If you do option #2 then you blindly process the bytes with the signature algorithm, verify they are trusted and THEN run your parser on bytes of a signed / known origin.

Compare:

signedParseInlineSig( '{ "a": 1, "signature": "<<INVALID>>" }' );

signedParseOutOfBandSig( '{ "a": 1 }', "<<INVALID>>" );

...with #1 you have to run isValid( input, JSON.parse(input).signature )

...with #2 you run isValid( input, signature ) && JSON.parse( input )
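
To make the ordering concrete, here's a minimal Node.js sketch (assuming an HMAC-SHA256 tag; the helper names are made up for illustration):

    const crypto = require('crypto');

    // Option #2: verify the raw bytes first; the parser only ever sees
    // input of known origin.
    function verifyThenParse(rawBytes, tagHex, key) {
      const expected = crypto.createHmac('sha256', key).update(rawBytes).digest();
      const actual = Buffer.from(tagHex, 'hex');
      if (expected.length !== actual.length ||
          !crypto.timingSafeEqual(expected, actual)) {
        throw new Error('bad MAC'); // hostile bytes never reach JSON.parse
      }
      return JSON.parse(rawBytes.toString('utf8'));
    }

    // Option #1: the parser and a re-serializer both run on untrusted
    // input before any verification is possible.
    function parseThenVerify(rawBytes, key) {
      const obj = JSON.parse(rawBytes.toString('utf8')); // untrusted bytes hit the parser
      const tagHex = obj.signature;
      delete obj.signature;                  // mutate the signed object...
      const reencoded = JSON.stringify(obj); // ...and hope it matches the signer byte-for-byte
      const expected = crypto.createHmac('sha256', key).update(reencoded).digest('hex');
      if (expected !== tagHex) throw new Error('bad MAC');
      return obj;
    }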


> Basically if you try and do option #1 you actually need to parse the content, and THEN find out it's untrusted (which means you need to _execute_ your parser on the potentially unknown / hostile bytes), and then pretend you never processed them in the first place (discard) unknown / hostile bytes.

And you need to remove the signature and reassemble the modified data structure back to bytes in EXACTLY the same way as the signer did. This is more work (for larger data structures) and way harder to get right.

Re-normalization of the message also has some other issues, e.g. you need to make sure that you are parsing and processing the re-assembled version (what the signature was checked against), not the message you received; otherwise your signature might be completely useless (think about an attacker inserting duplicate keys: the re-normalization might remove them, but your parser might normally not. Signature validates, but you're not processing what was signed! Oops.)
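
For instance, JavaScript's own JSON.parse keeps the last of any duplicate keys:

    // JSON.parse silently keeps the last duplicate key:
    JSON.parse('{"amount": 1, "amount": 9999}'); // => { amount: 9999 }

    // If the canonicalizer you verified against kept the first occurrence
    // (or rejected duplicates) instead, the bytes you checked and the
    // object you go on to process now disagree, with no error raised.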

If you do this the best case scenario is that it kinda seems to work, and if you're lucky it's even secure, but it actually doesn't work or silently stops working for some messages after you update a parser somewhere in the system, because suddenly they disagree about some edge case, and your system breaks.


+1e6

Never design a protocol where you must re-encode (and canonicalize! ouch!) in order to verify signatures. Instead you should wrap the thing to sign (and the signature) as an octet string. E.g.,

    {"thing-to-sign":"<base64-encoded-thing>",
     "signer-info":...,
     "signature":"<base64-encoded-signature-of-the-base64-encoded-thing>"}
This basically kills any joy of using JSON...

This, for example, does not work:

    {"thing":<thing-object>,
     "signer-info":...,
     "signature":"<base64-encoded-signature-of-thing>"}
because you'd have to have a JSON text parser that lets you get at the as-originally-encoded <thing> part of the above JSON text. This is not a common JSON text parser feature! So implementors would tend to re-encode <thing-object> in order to verify the signature.

This also doesn't work, for the same reason and because it's even harder to deal with a signature that's in the middle of the <thing>:

    {"thing-field0":...,
     "thing-field1":...,
     "signer-info":...,
     "signature":"<base64-encoded-signature-of-the-base64-encoded-thing>",
     "thing-fieldN":...}
Even if you promise and keep the promise to put the signature fields first or last, it's still super difficult to make this work well for other implementors, and is difficult even for yourself: you'll probably end up writing a new JSON parser from scratch to deal with this mess without having to re-encode, and most likely you'll opt to re-encode.

Re-encoding for signature verification requires canonicalization. For JSON canonicalization means:

- you must specify object key ("name") order

- you must specify what, if any, interstitial whitespace to have

- you must specify a canonical string representation (e.g., all Unicode escaped, or all Unicode not escaped, ...)

- you must specify a canonical number representation (oof!)

Numbers make canonicalization really tricky! You'd better limit yourself to 52-bit signed integers. If you use real numbers you'll quickly get into an IEEE 754 mess.
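
A couple of quick JavaScript illustrations of the mess:

    // A number the signer encoded as '1000000000000000000000' comes back
    // in exponent notation after a parse/stringify round trip:
    JSON.stringify(JSON.parse('1000000000000000000000')); // '1e+21'

    // And integers past 2^53 silently lose precision before the
    // serializer is even involved:
    JSON.parse('{"id": 9007199254740993}').id; // => 9007199254740992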

But again, no one will think this is useful:

    {"thing-to-sign":"<base64-encoded-thing>",
     "signer-info":...,
     "signature":"<base64-encoded-signature-of-the-base64-encoded-thing>"}
if the whole point of using JSON was to make this sort of thing close to human readable.

Now, of course, it's trivial to write some jq code to decode the <thing> and pretty-print it, so it's not the end of the world. But still, people will resist this approach and we'll be right back to defining a canonical JSON encoding.
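
For what it's worth, the wrapped approach is only a few lines in Node.js. A sketch assuming HMAC-SHA256 (a real design would also pin the algorithm and a key identifier in signer-info, and use a constant-time comparison):

    const crypto = require('crypto');

    // Signer: serialize the payload once, then sign the opaque octet string.
    function wrapAndSign(thing, key) {
      const blob = Buffer.from(JSON.stringify(thing)).toString('base64');
      const sig = crypto.createHmac('sha256', key).update(blob).digest('base64');
      return JSON.stringify({ 'thing-to-sign': blob, 'signature': sig });
    }

    // Verifier: check the blob exactly as received. No canonicalization,
    // and the payload stays opaque until it's trusted.
    function verifyAndUnwrap(envelopeText, key) {
      const env = JSON.parse(envelopeText); // envelope only; payload is still a string
      const expected = crypto.createHmac('sha256', key)
                             .update(env['thing-to-sign']).digest('base64');
      if (expected !== env['signature']) throw new Error('bad signature');
      return JSON.parse(Buffer.from(env['thing-to-sign'], 'base64').toString('utf8'));
    }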


Maybe someone can comment, but I can't understand the value of this.

The meat of this proposal is the specification of the envelope. But it consists largely of things you would have to decide how to map to your application domain. Since there's no reason different apps would/could map these things the same way, there's no opportunity for interop created here.

I guess it ends up as an idea or example of how you might express a generic message in JSON. So that's of some use.


Agreed. There clearly is a decent amount of work put into this, but it hasn't been thought through. And I'm not even sure that what is there is well specified; for example, you can have priorities of 1-10, and we are told that 1 is the lowest and 10 is the highest. We are also allowed to instead use the words 'low', 'normal', and 'high' for priority, but we are never told how those relate to the numeric priorities (e.g., is '10' higher than 'high'?)


The spec describes a message format and isn't intended to be a formal Internet RFC. As such, the thought was to leave definitions up to users rather than dictate things like priority ranges.


I agree. There is already a universal message format and it's called plain text. Wrapping that in JSON brings nothing to the table except very slightly slower parsing.

Web folks love JSON and rightly so, it solves many things in JS-land. It is not a useful message format if you are interested in high efficiency or small message sizes. In those cases fully binary messages are the way to go, of course.

If there were two things I would tell the web devs at my employer, they would be these: There is no universal message format that does not already exist, and web tech doesn't solve non-web problems better than non-web tech.

Bonus third thing: JavaScript rots your mind. Avoid it.


> Bonus third thing: JavaScript rots your mind. Avoid it.

Could you elaborate on your hyperbole? What exactly "rots the brain"? Is Javascript deficient in some fundamental way that irritates you? Or do you merely object to its 20-something year history of use on websites?

Javascript is the only language that browsers support. Would you prefer if browsers historically supported only Python instead of Javascript?

Expressing only emotions toward a tool is inefficient and hinders improvement of the tool. Please help the conversation by identifying specific issues or failings.


It's just a poorly designed language that teaches the wrong things. Once the things it teaches are learned they are difficult to unlearn. You forget how to do things without JavaScript when you write JavaScript for a while. You forget how to do much of anything that is not JavaScript.


> *Once the things it teaches are learned they are difficult to unlearn.*

Could you describe one of these "things it teaches"? Is it a concrete concept like ternary operators? Having gone through Excel, VB briefly, and Powershell, Javascript was my first exposure to ternary.


I'm not sure we agree on much, because plain text, by itself, is just an encoding. A message format also needs a syntax and a parser, both of which you get with JSON, if you can live with its gaps. (You also need a schema, and typically end up needing things like query APIs, storage, queues, etc., which JSON doesn't give you but which a wide range of JSON-aware solutions do.)


> Bonus third thing: JavaScript rots your mind. Avoid it.

Grow up you child


Please keep discussion civil.


> UMF is being used in IoT (Internet of Things) applications where message sizes need to remain small. To allow UMF to be used in resource contained environments a short form of UMF is offered.

Then offer it in Protobuf :-) In general, I don't understand where this would be used instead of an app's specific envelope. The use cases should be specified better because it's really easy for any app to just send their own preferred JSON format without following this.

Tangentially related, I similarly tried to make a messaging format (but with more bells and whistles and more about the transfer, storage, permissions, etc). The proto files are at [0] and the messaging platform is still under active development.

0 - https://github.com/cretz/yukup/tree/0cc926f98d01fba64b818383...


A few issues I have with UMF.

It's missing a mission statement, or use case.

The intro says it's used to avoid inconsistent formatting, but there exist other message formats.

The body field is application specific, which introduces an area for inconsistent formatting and reduces the interoperability, or universalness, of the message format.

The destination doesn't need to be in the message, most of the time the destination is implicit in where the message is sent. (If I send a UMF over email the destination would be in the email metadata and the UMF metadata)

The total amount of metadata that would be included in a UMF message sent over TCP/IP would be silly.

There are arbitrary and undefended specification decisions. The transport protocol is left ambiguous, but both the schema and the format are specified. The metadata around body is very standardized, but anything goes inside body, which is the primary payload.


+1 for mission statement, use case.

The point about inconsistent formatting is that by aligning teams on how a message is formatted, groups can avoid the introduction of multiple message formats between distributed services.

It's true the body field does introduce inconsistency and that's left for the application developers to resolve. The envelope fields are intended to be used in queuing and routing situations.

I disagree with "The destination doesn't need to be in the message". What about the use case where a message is forwarded or moves through proxy services?

+1 for metadata being larger than the payload :-D - Can't debate that. The only required fields are 'to', 'from' and 'body', and the short form of UMF can be used. Even so, we've encountered situations where the metadata is still larger than the payload. But that doesn't completely invalidate the presence of the envelope in a distributed application.

Thanks, I really appreciate the time you took to offer feedback!


Don't fields need to be sorted in order for signatures to be reproducible with JSON objects? There's no mention of that.


Lots of canonicalization required, and it's quite easy to run into validation bugs across implementations (does your $LANGOFCHOICE serialize JSON exactly the same as the sender's?)
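
Right, and even "just sort the keys" takes deliberate code. A naive sketch (which still leaves number formatting, string escaping, and duplicate-key handling unspecified, exactly the hard parts):

    // Naive canonical serializer: recursively sorts object keys.
    function canonicalize(value) {
      if (Array.isArray(value)) {
        return '[' + value.map(canonicalize).join(',') + ']';
      }
      if (value !== null && typeof value === 'object') {
        return '{' + Object.keys(value).sort().map(
          k => JSON.stringify(k) + ':' + canonicalize(value[k])
        ).join(',') + '}';
      }
      return JSON.stringify(value); // strings, numbers, booleans, null
    }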


My dude should quit his architect positions.

Consistent hashing of messages requires that every application which handles the message lay the fields out in the same manner; this is a recursive problem for every layer that puts its data in the UMF. Right out of the gate this creates caching complexity issues.

Also, there are no cache control options by default.

Also there isn't an error handling/logging field, so the responder will have to do something out-of-band to acknowledge errors.

Also there's no mention of compression or alternative message formats (XML, proto, etc.).

The sender field should likely be URI-encoded data. Then which node (of several) could receive the message becomes an infrastructure concern, not an encoding concern. Getting this deep here is kind of bad, same with the TTL. If we are going this deep... why no cache control/consistent hashing/compression?

If you care about strictness or correctness you should use RFC 3339 instead of ISO 8601. Also, not including UnixTime/UnixTime-nanoseconds is a poor choice.

Offering alternative keys for standard keys is dumb. This feels like an accident that nobody caught until later and then received a backwards-compatible monkey patch.

Why haven't you considered protobuf if you have concerns about message size and are working under power constraints? UTF/Base64 is kind of shite for power savings.


Hahaha. My dude, it's a damn good thing I'm not applying for your startup! And a great thing that my current and past employers didn't think I should resign. So you know what? I'm going to keep up this charade for a bit longer. Besides, the pay is great! :-D

The canonicalization issue with regard to fields and signatures is definitely and without question a valid point, and the signature field should probably be removed from the spec.

You raise good points. I'll definitely reconsider your feedback in future iterations!

Your point about a logging field is interesting, but the MID field could be used in an out-of-band error acknowledgment.

Protobuf is great - but what if you don't need or want it?

Thanks for taking the time to comment!


Projects like these need a motivation section.

It's not clear what the use case is.


I love open source projects with no documentation or Getting Started guides.....said no one ever.


They put everything in umf.md rather than README.md https://github.com/cjus/umf/blob/master/umf.md


In an earlier version of the docs I actually had the spec in the readme but felt the sheer size was enough to send folks running. Using a separate file for the spec (which I realize needs work!) allowed for a clean separation.


You should probably link to the spec or something in the README so it's easier for people to find.


Wow - I really ruffled some feathers with this post :-D As I scanned the feedback, I found myself agreeing with some, but not all, of the comments.

I actually found some of the direct attacks amusing and got a good laugh. That said, I'd like to thank everyone who took the time to comment. One of the goals of any specification should be to iterate the spec based on the valuable feedback of others. I'll definitely take this opportunity to do that.

Thanks again.


Doesn't JSON-LD (especially ActivityPub & co.) mostly deal with this already?


https://github.com/edn-format/edn

edn is an extensible data notation. A superset of edn is used by Clojure to represent programs, and it is used by Datomic and other applications as a data transfer format. This spec describes edn in isolation from those and other specific use cases, to help facilitate implementation of readers and writers in other languages, and for other uses.


Reading through the comments here I'm realizing that my single biggest error may have been the use of the word "Universal" :-D When you consider the spec as a "basis for agreement" between distributed application authors and not a unified theory of messaging - then the spec becomes a lot clearer.


I'll just note, once again, how verbose, unattractive & difficult to parse JSON is compared to S-expressions. Here are several of the examples in the spec in both formats, in order (I've rearranged the fields so that required fields come first & optional fields come later).

2.1:

    {
      "mid": "ef5a7369-f0b9-4143-a49d-2b9c7ee51117",
      "rmid": "66c61afc-037b-4229-ace4-5ec4d788903e",
      "to": "uid:123",
      "from": "uid:56",
      "type": "dm",
      "version": "UMF/1.4.3",
      "priority": "10",
      "timestamp": "2013-09-29T10:40Z",
      "body": {
        "message": "How is it going?"
      }
    }

    (message
     ef5a7369-f0b9-4143-a49d-2b9c7ee51117
     (to uid:123)
     (from uid:56)
     (version SMF/1.4.3)
     (timestamp 2013-09-29T10:40Z)
     (rmid 66c61afc-037b-4229-ace4-5ec4d788903e)
     (type dm)
     (priority 10)
     (body
      (message "How is it going?")))
2.2.11:

    {
      "mid": "ef5a7369-f0b9-4143-a49d-2b9c7ee51117",
      "to": "uid:56",
      "from": "game:store",
      "version": "UMF/1.3",
      "timestamp": "2013-09-29T10:40Z",
      "body": {
        "type": "store:purchase",
        "itemID": "5x:winnings:multiplier",
        "expiration": "2014-02-10T10:40Z"
      }
    }

    (message
     ef5a7369-f0b9-4143-a49d-2b9c7ee51117
     (to uid:56)
     (from game:store)
     (version SMF/1.3)
     (timestamp 2013-09-29T10:40Z)
     (body (type store:purchase)
           (itemID "5x:winnings:multiplier")
           (expiration 2014-02-10T10:40Z)))
2.2.11.2

Note how JSON has to rely on metadata to indicate that a value is a Base64 sequence, whereas this is natively supported by canonical S-expressions. Note also how the S-expression format natively supports types ('display hints') for its values.

    {
      "mid": "ef5a7369-f0b9-4143-a49d-2b9c7ee51117",
      "to": "uid:134",
      "from": "uid:56",
      "version": "UMF/1.3",
      "timestamp": "2013-09-29T10:40Z",
      "body": {
        "type": "private:message",
        "contentType": "text/plain",
        "base64": "SSBzZWUgeW91IHRvb2sgdGhlIHRyb3VibGUgdG8gZGVjb2RlIHRoaXMgbWVzc2FnZS4="
      }
    }

    (message
     ef5a7369-f0b9-4143-a49d-2b9c7ee51117
     (to uid:134)
     (from uid:56)
     (version SMF/1.3)
     (timestamp 2013-09-29T10:40Z)
     (body
      (type private:message)
      [text/plain]|SSBzZWUgeW91IHRvb2sgdGhlIHRyb3VibGUgdG8gZGVjb2RlIHRoaXMgbWVzc2FnZS4=|))
2.2.11.3

One might expect S-expressions to shine when it comes to sending multiple items, and of course one would be correct.

Also note how the parallel structure of the message & message/body/message objects raises the question of whether the message/body/message schema should also be UMF.

    {
      "mid": "ef5a7369-f0b9-4143-a49d-2b9c7ee51117",
      "to": "uid:134",
      "from": "chat:room:14",
      "version": "UMF/1.3",
      "timestamp": "2013-09-29T10:40Z",
      "body": {
        "type": "chat:messages",
        "messages": [
          {
            "from": "moderator",
            "text": "Susan welcome to chat Nation NYC",
            "ts": "2013-09-29T10:34Z"
          },
          {
            "from": "uid:16",
            "text": "Rex, you are one lucky SOB!",
            "ts": "2013-09-29T10:30Z"
          },
          {
            "from": "uid:133",
            "text": "Rex you're going down this next round",
            "ts": "2013-09-29T10:31Z"
          }
        ]
      }
    }

    (message
     ef5a7369-f0b9-4143-a49d-2b9c7ee51117
     (to uid:134)
     (from chat:room:14)
     (version SMF/1.3)
     (timestamp 2013-09-29T10:40Z)
     (body
      (type chat:messages)
      (messages 
       (message
        (from moderator)
        (text "Susan welcome to chat Nation NYC")
        (ts 2013-09-29T10:34Z))
       (message
        (from uid:16)
        (text "Rex, you are one lucky SOB!")
        (ts 2013-09-29T10:30Z))
       (message
        (from uid:133)
        (text "Rex you're going down this next round")
        (ts 2013-09-29T10:31Z)))))
2.2.17

Note that there is a complex canonicalisation procedure for the JSON object, and that the sender must mutate the signed object; by contrast, the S-expression format is properly layered and doesn't mutate signed objects (which implies that it's possible to chain signatures cleanly).

    {
      "mid": "ef5a7369-f0b9-4143-a49d-2b9c7ee51117",
      "to": "uid:123",
      "from": "uid:56",
      "version": "UMF/1.4.6",
      "signature": "c0fa1bc00531bd78ef38c628449c5102aeabd49b5dc3a2a516ea6ea959d6658e",
      "body": {}
    }

    (signature
     (message
      ef5a7369-f0b9-4143-a49d-2b9c7ee51117
      (to uid:123)
      (from uid:56)
      (version SMF/1.4.6)
      (body))
     |c0fa1bc00531bd78ef38c628449c5102aeabd49b5dc3a2a516ea6ea959d6658e|)
It's not too late to switch away from JSON, it really isn't.


Looking at the examples and thinking about how a C like program would process them, the S-Expressions look way more complex.

With JSON you know immediately what kind of datatype you are dealing with. You see a { you allocate an associative array, or if you see a [ you know you're about to get an ordered list. With S-Expressions it seems like you need to parse the entire thing and then figure out what kind of data structure you have.

In fact there doesn't appear to be any indicator at all. Looking at 2.2.11.3 we see in the JSON that "messages" is an ordered list while the content of each message is an associative array, but in the S-Expression they look identical.

So in C-like land you would end up with a big nested mess of arrays that are slow to parse and even harder to figure out the address of any object. There's a ton of friction that you don't have with JSON data.


When I need to parse or validate S-expressions, I just write the functions (here message, to, from, timestamp, etc.) so that eval()ing the S-expressions either validates them or returns whatever data structure I need.

So the easiest way would be to use or code a small lisp interpreter in C and eval the S-expression. For example, one could use Chicken Scheme to do so.


Or we could...not...evaluate random code potentially coming from hostile environments. That would also be cool and good.

And, yes, it's possible to have vulnerabilities in a JSON parser--but it is orders of magnitude easier to have them in an arbitrary language parser.


If you evaluate it in an environment where only the functions you choose are defined, the security risk is nil.

Validating a document is a complex, domain-dependent problem. It is far easier to create a secure Domain-Specific Language to handle this than to end up with an accidentally Turing-complete abomination like XSLT: http://www.unidex.com/turing/utm.htm


>If you evaluate it in an environment where only the functions you choose are defined, the security risk is nil.

Oh. So all you have to do is write perfectly secure code and run it in a perfectly secure environment, and nothing bad can possibly happen.

Well shit, why didn't anyone else ever think of that?


> When I need to parse or validate S-expressions, I just write the functions (here message, to, from, timestamp, etc.) so that eval()ing the S-expressions either validates it or returns whatever data structure I need.

facepalm

As soon as you've decided to call an eval() function on potentially untrusted data, you've lost to an attacker.


I want to be a fan of csexps. I'm a big fan of SPKI/SDSI conceptually. Unfortunately I lack your enthusiasm for trying to evangelize them, and think JSON is probably here to stay.

That said, regarding JSON and the inclusion of self-describing encoding information for e.g. Base64, I created a microformat for that:

https://www.tjson.org/


This reminds me of Meteor's ejson but much less verbose. Very nice. Have you thought about adding a way to specify the type of object? Maybe something like `"field:<O(Post)>": ...`.


> It's not too late to switch away from JSON, it really isn't.

Yes, it is. People are already used to the fact that in dynamic languages (JavaScript, Python, Ruby) you can work with unknown structures in a performant way, and they will be mapped properly to the underlying data model.

They are not going to switch to something where you need to have a schema just to parse it properly.


> People are already used to the fact that in dynamic languages (JavaScript, Python, Ruby) you can work with unknown structures in a performant way, and they will be mapped properly to the underlying data model.

That's actually one of my concerns with JSON: it doesn't really convey the underlying data model. Sure, it can handle numbers — but it can't handle constraints like 'age must be positive.' Sure, it can handle strings — but there's no way in JSON to differentiate between Base64-encoded bytes & a normal string.

JSON lets one play with data, but one never knows if it's actually valid. It's dynamic typing, applied to data itself.


Exactly, it conveys the underlying data model of dynamic languages, or to be specific, of a common subset of their data types.

As for data validity, this is a completely separate question. I don't believe that validation should be a part of the language or data format -- my language lets me write 'age = "yellow"', and so should my data format.


How does it differentiate booleans from strings? Does every major language have an implementation ready to use?


I've always liked S-expressions; unfortunately, they haven't caught on in the circles I travel in.


WS-* meets JSON.


This spec should be rebuilt from the start to extend http://cloudevents.io .




Relevant comment by Linus Torvalds on standards: "I've said this before, and I'll say it again: a standards paper is just so much toilet paper when it conflicts with reality. It has absolutely _zero_ relevance. In fact, I'll take real toilet paper over standards any day, because at least that way I won't have splinters and ink up my arse." http://www.yodaiken.com/2018/06/07/torvalds-on-aliasing/


Ah Linus - enough said. The UMF spec wasn't intended to be a standards paper. You might be able to tell it isn't formatted as such. Based on the comments in this post - it has at the very least helped spark interesting feedback and debate.


Not every tool is a competing standard.



