
Bad JSON Parsers - lovasoa
https://github.com/lovasoa/bad_json_parsers
======
thegeomaster
The JSON spec [1] clearly states, in section 9:

    
    
      An implementation may set limits on the maximum depth of
      nesting.
    

So, this is kind of a pointless exercise. The author would do well to inform
themselves better before slinging epithets such as "bad" at a library.

[1]:
[https://tools.ietf.org/html/rfc7159](https://tools.ietf.org/html/rfc7159)

~~~
ken
I would be perfectly OK with that answer _if_ they documented that they use
the C stack and therefore have a small finite depth limit, and/or threw some
sort of TooMuchNesting exception before they hit that limit.

AFAICT, many/most of these libraries do neither of the above. There's nothing
in the documentation to suggest they'll have trouble with deeply nested
inputs, and their response to such a situation is to simply crash the process.
I'd call that a bug.

I don't consider it acceptable for a library to simply say "the spec says
we're technically allowed to 'set limits' on what we accept, so here's some
nasal demons". If you've got limits, you've got to document them, or at least
make them recoverable.

~~~
asveikau
It's hard for a library to portably know when the C stack will be exhausted, or
to give a concrete estimate that makes sense for documentation purposes.

Firstly, that concept is not exposed at all to standard-compliant C code. The
standard is vague enough to allow for many possible variations of the program
stack abstraction, so any such check would not be portable, standard C.

Second, the library author doesn't know ahead of time how much stack space
_you_ , the caller, have consumed before calling the library. Do you call it
deep inside some ui framework with a big stack? With a huge i/o buffer on the
stack? They don't know.

Translating all of this into "this is how deeply nested your JSON can be" is
not a sane exercise; at best you can give a machine-specific guess that will
vary a bit with your own application's stack needs.

But perhaps one can side-step all of that and get most of the way with an
artificial, arbitrary-constant runtime depth check that is safely below likely
actual stack limits.
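
For what it's worth, a rough sketch of that kind of check (Python for brevity; the parse_array helper and the MAX_DEPTH constant are made-up, illustrative names, and it handles arrays only):

    
    
      MAX_DEPTH = 512  # arbitrary constant, hopefully well below real stack limits
      
      def parse_array(text, pos=0, depth=0):
          # Recursive-descent fragment (arrays of arrays only) that refuses to
          # recurse past MAX_DEPTH instead of overflowing the native stack.
          if depth > MAX_DEPTH:
              raise ValueError("nesting deeper than %d at byte %d" % (MAX_DEPTH, pos))
          if text[pos] != '[':
              raise ValueError("expected '[' at byte %d" % pos)
          pos += 1
          items = []
          while text[pos] != ']':
              value, pos = parse_array(text, pos, depth + 1)
              items.append(value)
          return items, pos + 1
    

The caller gets a catchable error instead of a crash, which is the point.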

~~~
ken
Sure, that’s all true. So don’t use the C stack for parsing. It’s not
difficult.

------
vesinisa
It would've been interesting to know which platform the JavaScript test was
performed on (judging by the source code, likely Node.js). However, on my
current machine Chrome chokes with a stack overflow above nesting level 125,778.

More amazingly, Firefox is currently happily parsing a JSON array with
50,000,000 levels of nesting at full CPU single core utilization and consuming
about 12 GB of memory. This was after an array of depth 10,000,000 was
successfully parsed, peaking at around 8 GB of RAM. So Firefox's parser
seems to be only constrained by available memory!

Also - while this is running in the console the Firefox UI is completely
usable and I can browse the web in another tab. Mozilla has come a long way!

Edit: Parsing the array with 50,000,000 levels of nesting eventually failed
with "Error: out of memory".

~~~
anoncake
Great. Is anything else usable too?

~~~
vesinisa
Pretty much, but RAM utilization is of course closing in on 100%, so starting
any new programs would fail at the OS level.

------
stickfigure
My takeaway from this is that you can submit a long string of brackets to REST
apps built with Haskell and Ruby/Oj and crash the process and/or machine by
consuming all available RAM. Whereas the smarter JSON processors near the top
of the list will abort processing pathological inputs before serious harm
occurs.

Wait, which JSON parsers are supposed to be "bad"?

~~~
Spivak
Just because a structure is deeply nested doesn’t mean that it consumes a lot
of memory.

And just because your payload is massive and consumes a lot of memory doesn’t
mean it’s deeply nested.

I doubt any of the parsers are resilient against storing large documents in
memory. What makes them bad is that structures that would easily fit in memory
are unparsable.

In most cases your web server imposes a limit on the size of the HTTP payload,
so you wouldn’t normally be able to deliver a 128 GiB JSON file anyway. It’s
not like nested structures are zip bombs.

~~~
stickfigure
My comment was mostly tongue-in-cheek, but there's some truth to it. Since the
receiver creates a navigable data structure with 64-bit pointers, every two
bytes sent ("[]") becomes 20+ bytes in RAM (low estimate!). An
order-of-magnitude amplification helps an attacker.
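
As a rough illustration of the amplification (CPython-specific numbers, and only an assumption about how a typical parser materialises the result; real figures vary by language and parser):

    
    
      import json
      import sys
      
      def deep_size(obj):
          # Crude recursive size estimate for nested lists (ignores allocator overhead).
          size = sys.getsizeof(obj)
          if isinstance(obj, list):
              size += sum(deep_size(item) for item in obj)
          return size
      
      payload = "[" * 100 + "]" * 100   # 200 bytes on the wire
      parsed = json.loads(payload)      # 100 nested list objects in memory
      print(len(payload), deep_size(parsed))  # roughly 200 vs. several thousand bytes
    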

------
abrodersen
"The tar archive spec does not specify an upper limit on archive size, but my
kernel keeps saying 'no space left on device'. It's obviously a bad kernel."

~~~
LukeShu
On one hand, yes. On the other hand, the standard Ruby "json" gem can be made
to overflow on just 202 bytes of valid JSON input.

~~~
jrochkind1
I mean, it'd be a pathological 202-byte JSON document, but yeah (I guess it
could be a DoS attack of some kind, actually, hmm...)

Ruby is a good example, because the `oj` gem, on presumably the same system,
is listed as "infinite." Obviously not truly "infinite", eventually it'll run
out of RAM -- but this shows it is _not_ an issue of machine resources really.

As the OP notes, it's because if you implement using recursion, you'll run out
of stack in most languages (now let's start talking about tail call
optimization...). But this isn't the only implementation choice; you _can_
implement it without this limitation, and probably should.

~~~
nwellnhof
Tail call optimization wouldn't help with parsing JSON.

~~~
imtringued
Usually the solution is to create an array on the heap and manage the stack
yourself.
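
A minimal sketch of that idea in Python (max_depth is an illustrative name, not from any of the libraries discussed): traverse with an explicit list-as-stack on the heap, so the depth you can handle is bounded by memory rather than by the call stack.

    
    
      def max_depth(value):
          # Iterative traversal using an explicit heap-allocated stack.
          deepest = 0
          stack = [(value, 1)]
          while stack:
              node, depth = stack.pop()
              deepest = max(deepest, depth)
              if isinstance(node, list):
                  stack.extend((child, depth + 1) for child in node)
              elif isinstance(node, dict):
                  stack.extend((child, depth + 1) for child in node.values())
          return deepest
    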

------
jandrese
The JSON::XS documentation has this to say about the $max_depth setting for
the parser:

    
    
        Second, you need to avoid resource-starving attacks. That means you should
        limit the size of JSON texts you accept, or make sure then when your
        resources run out, that's just fine (e.g. by using a separate process that
        can crash safely). The size of a JSON text in octets or characters is
        usually a good indication of the size of the resources required to decode
        it into a Perl structure. While JSON::XS can check the size of the JSON
        text, it might be too late when you already have it in memory, so you
        might want to check the size before you accept the string.
    
        Third, JSON::XS recurses using the C stack when decoding objects and
        arrays. The C stack is a limited resource: for instance, on my amd64
        machine with 8MB of stack size I can decode around 180k nested arrays but
        only 14k nested JSON objects (due to perl itself recursing deeply on croak
        to free the temporary). If that is exceeded, the program crashes. To be
        conservative, the default nesting limit is set to 512. If your process has
        a smaller stack, you should adjust this setting accordingly with the
        "max_depth" method.
    

So accepting an unlimited depth structure by default is probably not a good
idea. There's a flag you can set if you need to parse something like that, but
the defaults are set to parse anything sensible and reject documents that may
be malicious.

~~~
lovasoa
My takeaway would be "using the C stack to parse JSON is probably not a good
idea", not "accepting an unlimited depth structure by default is probably not
a good idea".

This limit is entirely artificial, and due only to the design of the parser.

As the documentation you quote says, you should limit the size of JSON texts
you accept. If you don't, then you are at risk of a memory exhaustion attack.
But this is completely unrelated to the maximum depth you accept. If your
parser doesn't use the C stack to parse recursive structures, then deeply
nested payloads will parse just fine, without using an abnormal amount of
memory.

~~~
chrismorgan
I disagree with your conclusion. I would instead say that the _combination_ of
using the stack to parse JSON and allowing unlimited depth is the problem. In
practice, almost no one ever deals with JSON structures even a hundred levels
deep (I’d quite seriously guess it to be less than one in a million
applications that desire it), so if you artificially cap it to protect against
stack overflow you can then safely use the stack, thereby getting better
performance by reducing the required allocations, while also protecting
against one DoS attack vector.

~~~
lovasoa
You never "artificially cap it to protect against stack overflow": most
libraries that cap it artificially cap it to a fixed value that may or may not
protect against stack overflow, depending on how large the stack already is
when you call the parser.

~~~
chrismorgan
These caps we’re talking about are normally a small fraction of the total
stack size, and few applications run anywhere _near_ the total stack size. I
acknowledge your point (and had it in mind when I wrote my first comment), but
maintain my position that this capping _is_ protecting against the stack
overflow DoS attack: if you’re able to get a stack overflow with your capped
JSON parser, you were probably already able to get a stack overflow, because
it’s not adding overly much to the stack.

The problem that’s being protected against is _user-controlled_ stack
overflow—that’s what makes it a DoS _attack_.

------
zenhack
I'd argue this is a pitfall that languages can and should solve. There's no
reason why writing a naive recursive descent parser should cause this problem.
The fix, using a separate stack data structure, is telling, since it's the
same algorithm with the same space complexity; you're just saying "my
language's call stack isn't a usable implementation, so I need a different
stack." Why not just fix the call stack?

In languages like Rust/C/C++, this might be hard, since you can't grow the
stack by moving it (you'd need to update pointers, which are too visible in
those languages, and even identifying a pointer at runtime is undecidable),
and segmented stacks cause the "hot-split" problem.

But most languages could just make this problem go away by growing the call
stack when needed. A few of them (incl. Haskell) do. I'm following suit in a
language I'm working on.

~~~
anaphor
Most purely functional languages (like Haskell) don't really have a "call
stack", but the computation is carried out by graph reduction, so it can
obviously detect cycles and do some optimizations there, depending on the type
of recursion.

If you want the gory details: [https://www.microsoft.com/en-us/research/wp-content/uploads/...](https://www.microsoft.com/en-us/research/wp-content/uploads/1992/04/spineless-tagless-gmachine.pdf)

~~~
zenhack
Yeah, I've read the STG paper, though it's been a while. IIRC there isn't a
call stack as we'd normally think of it, but there's still a stack of pending
pattern matches, so you actually do hit the same issues. I vaguely recall that
making deep recursion "just work" in GHC only happened in the past couple of
years.

------
jwilk
Related: "Parsing JSON is a Minefield"
[http://seriot.ch/parsing_json.php](http://seriot.ch/parsing_json.php)

Discussed on HN:

• in 2016:
[https://news.ycombinator.com/item?id=12796556](https://news.ycombinator.com/item?id=12796556)

• in 2018:
[https://news.ycombinator.com/item?id=16897061](https://news.ycombinator.com/item?id=16897061)

------
bloody-crow
The repo is calling it bad, but in production I'd actually prefer a parser
that throws an appropriate error based on a setting that can be configured
with some reasonable defaults over something that just silently chugs along
until it runs out of memory.

I can imagine it'd be a lot easier to overwhelm and DDoS a server that
attempts to parse incoming JSON requests without any depth bounds too.

~~~
Spivak
Just because your data structure is deeply nested doesn’t mean it takes a lot
of memory.

[[[[]]]] is still only 4 objects worth of memory on a stack.

------
mojzu
I think the reason that the nesting level limit is 128 and the file size is
256 bytes for serde_json in this test is that the recursion_limit attribute is
not configured.

[https://doc.rust-lang.org/reference/attributes/limits.html](https://doc.rust-lang.org/reference/attributes/limits.html)

The default value of this attribute is 128 which would explain both results,
I'm guessing with configuration you could significantly increase both values
at the expense of compilation time.

~~~
edflsafoiewq
That's the recursion limit for compile-time stuff, like how many layers of
macros it will expand.

serde does have this though:
[https://docs.rs/serde_json/1.0.41/serde_json/struct.Deserial...](https://docs.rs/serde_json/1.0.41/serde_json/struct.Deserializer.html#method.disable_recursion_limit)

~~~
mojzu
For some reason I thought serde was using macros to perform deserialisation.
Now I know that isn't the case, and it's a coincidence that the default macro
recursion limit and serde's default nesting limit are both 128.

~~~
lovasoa
Even if serde does indeed use macros, they run at compile time. The limit on
macro recursion depth has an impact on which programs compile, but no impact
on the runtime behavior of those programs.

------
et2o
Is this relevant? Are there situations where people nest JSON even 100 or
1,000 levels deep? I have not come across anything like that.

~~~
bargle0
The question isn't necessarily what might be found in nature, but what might
be intentionally constructed in order to attack something. Without knowing
more, the segfault in the C++ library is somewhat alarming.

~~~
kibwen
On the topic of malicious input, the serde_json example seems to be
deliberately limiting recursion depth in order to head off DoS attacks:

 _" serde_json has a recursion limit to protect from malicious clients sending
a deeply recursive structure and DOS-ing a server. However, it does have an
optional feature unbounded_depth that disables this protection. It still uses
the stack though, so it will still eventually blow the stack instead of using
all available memory"_

[https://github.com/lovasoa/bad_json_parsers/issues/7](https://github.com/lovasoa/bad_json_parsers/issues/7)

This begins to suggest that it may be downright desirable for servers to toss
out deeply-nested JSON, regardless of the JSON specification.

~~~
vbezhenar
Why not implement parsing without recursion? It doesn't look like a
particularly hard task to me. Why restrict users with artificial limitations?

~~~
marcosdumay
You'll need a stack anyway, because the structure is logically a tree. Better
to use the call stack, which is already there and heavily optimized.

~~~
Spivak
Naive person here. Isn’t the call stack a lot heavier than pushing data
structures on a std::stack?

~~~
MaulingMonkey
A recursive parsing call stack has the overhead of pushing/popping return
addresses and maybe a couple other registers. Maybe a bit more if there are
security protections like stack cookies that didn't get optimized away, or if
shadow stacks are enabled. It's not free, but is pretty cheap. A non-recursive
use of std::stack dodges some of that, but instead has the overhead of dynamic
allocation, which I'm generally going to consider more expensive - but does
have the advantage of being more of a once-off cost - so perhaps it'd be
cheaper for long but shallow JSON objects? The cost of the call stack vs
std::stack is probably dwarfed by the cost of allocating
arrays/objects/strings and decoding in either case.

If you want to over-optimize for that specific case _anyways_ , you can take a
hybrid approach with smaller stacks kept on the stack in a single linear root
blob, with larger stacks falling back onto the heap, using a non-recursive
style.

However, if you're capping JSON depths as an anti-DoS measure _anyways_ (your
parser isn't the only thing possibly vulnerable to blowing the stack via
recursion - anything dealing with the parsed result is also in danger if it
can be recursive), the heap fallback may be entirely dead code anyways... so
why bother coding it?

------
k__
I know, Halloween is over, but currently I'm working with an API where the
returned JSON is created by manual string concatenation. Sometimes commas and
quotation marks go missing.

~~~
kardos
Does the API specify under which conditions you can expect missing characters?
There should be an undefined behaviour angle here somewhere

~~~
k__
I wish.

The dev simply writes new string concatenation code every time a new endpoint
is needed, so I can't even hope for some bugs being gone for good after he's
tested his implementation with a few endpoints.

------
jchw
They’ve changed the title[1] to remove the word “bad”; I think the title
should be updated here as well, even if the repo name still reflects the
original title. It’s not reasonable to use this single criterion as a measure
of “badness,” as there are _many_ different variables. I would place
security/memory safety 1000x higher than recursion limits. I have never hit
JSON recursion limits in production.

[1]:
[https://github.com/lovasoa/bad_json_parsers/commit/16f6ed75b...](https://github.com/lovasoa/bad_json_parsers/commit/16f6ed75b4edcac4998878229783eadaa7212f18)

------
cloudkj
Coincidentally, I just wrote a simple JSON parser the other day as a toy
exercise. A simplified snippet of parsing code (handling only arrays) relevant
to the discussion here would be something like:

    
    
      def parse(text):
          stack = []
          index = 0
          result = None
          while index < len(text):
              char = text[index]
              val = None
              if char == '[':
                  stack.append([])
              elif char == ']':
                  val = stack.pop()
              index += 1
              if val is not None:
                  if stack:
                      stack[-1].append(val)
                  else:
                      result = val
          return result
    

Using the test utilities in the repo indicates that the parsing logic can
handle arbitrarily nested arrays (e.g. up to the 5,000,000 max in the test
script), bounded only by the limits of the heap.

It seems like the main criticism here is against recursive implementations. Or
am I missing something?

~~~
sansnomme
A better question to ask is: is it a success of CS education that developers
know when to make tradeoffs in software engineering and go for the most easily
understandable code (the recursive implementation used in most pseudocode),
since deeply recursive cases are rare, or is it a failure that software
engineers cannot figure out how to convert a recursive algorithm into an
iterative implementation for greater memory efficiency?

~~~
kadoban
It doesn't necessarily mean anything that deeply nested cases are rare. If
you're making a general library, someone is going to throw user input at it,
and some users will try to break your system. If it can avoid a crash in that
case, it usually should.

------
twir
One of JSON's selling points is being human-readable. I would struggle to read
any JSON document with a nesting level greater than 6 or so. Why would I ever
want a nesting level > 100?

~~~
hombre_fatal
This doesn't really make sense to me. Just because JSON is human-readable
doesn't mean that humans are meant to read any possible serialized data nor
that it should always be an ergonomic read. Not everything you serialize is
just a config file or hand-edited data. It may be arbitrarily dense with
arbitrary data.

To me, this is kinda like seeing a JSON array of user IDs coming over the
network and going "Huh? Why not usernames? I thought JSON was supposed to be
human-readable!"

~~~
twir
I see your point; although if you look at the specs for a lot of easy-to-parse
formats for computers, a stated design goal is also easy-for-humans (e.g.
Markdown, YAML).

Large, complex object hierarchies with lots of nesting might make more sense
represented in binary (e.g. Avro).

I realize I'm making a little bit of a McLuhan-esque argument in a largely
computer science-oriented context, but I hope you can see what I'm getting at.

------
jackcviers3
Recursive parsers need not use the stack for recursion; they can use the heap
instead through trampolining, and not have an issue until memory is fully
consumed. That would impact performance somewhat, so users should also be
allowed to choose when they need the trampolined version. They should still
limit recursion depth and allow the user to set a value higher than the
default.
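
As a sketch of what trampolining looks like (Python generators standing in for whatever mechanism a given language provides; trampoline and depth are made-up names):

    
    
      def trampoline(gen):
          # Drive a generator-based "recursive" computation with an explicit heap
          # stack of generators, so nesting costs heap memory, not stack frames.
          stack, result = [gen], None
          while stack:
              try:
                  sub = stack[-1].send(result)  # "recursive call": a sub-generator
                  stack.append(sub)
                  result = None
              except StopIteration as stop:
                  stack.pop()
                  result = stop.value           # hand the sub-result back up
          return result
      
      def depth(node):
          # Generator form of a recursive depth computation: yield where you would recurse.
          if not isinstance(node, list) or not node:
              return 1 if isinstance(node, list) else 0
          best = 0
          for child in node:
              best = max(best, (yield depth(child)))
          return best + 1
      
      print(trampoline(depth([[[[[]]]]])))  # -> 5, without growing the call stack
    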

I can't help but think that the lazily evaluated Haskell version score of
infinity is probably somewhat misleading, as once you access something within
a large enough tree you'll probably run out of memory pretty quickly.

~~~
saagarjha
It’d be nice if it was possible to annotate recursive functions as using the
heap instead of the stack for recursion. I’ve seen too many implementations of
people writing a poor man’s stack frame to store on the heap and it just
totally destroys any sort of elegance the original code had.

------
philbo
Shameless self-promotion: I wrote a JSON parser for node.js that specifically
solves this problem of parsing deeply-nested and/or huge JSON payloads.

[https://www.npmjs.com/package/bfj](https://www.npmjs.com/package/bfj)

It's entirely asynchronous, yielding frequently to avoid monopolising the
event loop. And it uses a fixed-length buffer so it never exhausts available
memory. The tradeoff is that it's slow. But afaik it parses JSON payloads of
any size and any depth.

------
twic
I thought i'd try some other Java parsers, but the Maven build doesn't work
with current Maven, and it's not clear what you'd feed to test_parser.sh
anyway, as there's no driver script for the Java case.

One of the things i'm passionate about is always setting up a project in such
a way that someone new to it can download, build, and use it with an absolute
minimum of fuss. That means making the most of the build tools, and scripting
everything you can't do through the build tool. I don't always meet this
standard myself, because i am a weak and wretched human being, but it's great
when it's done.

Gradle is good here, because the wrapper means you always get a specific
Gradle version, although it can't control the JDK version, which is
increasingly painful in the post-8 age. Cargo is pretty good, because the user
just needs to install rustup, then they get a precisely controlled Rust and
Cargo version via the rust-toolchain file. I'd love it if there was a rustup-
wrapper which i could check in, which would obviate the need for a global
rustup installation. Ruby, Python, and Node do alright, but you have to ask
the user to install the version manager of your choice. C is absolutely
miserable.

Since this project has all of the above and more, making it as self-sufficient
as i would like would be quite an effort!

~~~
megous
How are C build systems miserable? If you're building for a general Linux
distro, you'll not have much control over the dependencies, but that's not a
problem with C; it's a problem with how programs are distributed in the
GNU/Linux distribution model.

~~~
twic
AFAIK, there is no tool that lets you specify a compiler version in your
project, then automatically build with that compiler. The equivalent of
rustup, rvm, nvm, etc.

There is also no standard package manager.

It is very hard to persuade GCC to use a specific set of libraries that you
have obtained using a package manager, or vendored or whatever, rather than
getting distracted by random libraries it happens to find in /lib etc. The
root of the pain is that you really want to use the host system's platform
libraries, the ones which implement the POSIX API, but once the compiler can
see those, it tends to go off looking for other things in the same place. In
my experience, at least!

These deficiencies are tied up with the historical interrelationship of C and
the operating system, but that doesn't mean that they aren't real.

------
cellularmitosis
Related question: why is the stack limit so tiny in modern operating systems?
Why do we allow programs free rein of the heap but not the stack?

Edit: I hadn’t considered that all of a process’s threads share the same
address space. In a 32-bit address space, a 4 MB stack limits you to 1024
threads (with nothing left over for heap).

However, perhaps the question still stands: why use a small stack size on a
64-bit system?

~~~
jlokier
One of the reasons is that if you momentarily use a very large amount of stack
to calculate something (say gigabytes), and then return, the large part of the
stack which is no longer used stays allocated, full of unused data taking up
RAM for the remaining life of the process, which might be a long time.

If a heap-allocating algorithm does the same thing, it's likely to free the
memory when it's finished with it.

------
skrebbel
The sort order is incorrect. He says they sort from worst to best, but the
worst one is the one that can segfault, and the perfectly good ones are all
the other ones that, if I read this correctly, just throw an exception.

------
microcolonel
In Clojure, we've taken to using a macro to generate custom parser drivers
(using Jackson underneath) from schemas. If we encounter an unexpected token
or invalid (according to the schema) value, we drop the whole input.

------
ape4
I was expecting a report on parsers that crash on challenging syntax, e.g.: {
hello:
'world\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'
}

~~~
lovasoa
Independently of the backslashes, this is not valid JSON: keys have to be
quoted, and JSON uses double quotes, not single quotes...

~~~
cerberusss
Hey look, a human-based JSON parser. I wonder what their maximum nesting level
is...

~~~
dmit
Over-under on when the first "ML-based approach to parsing JSON" story hits
HN: 1.5 years from now.

------
mynegation
Just a couple of weeks ago I had to debug a production problem where the
Jackson parser failed on parsing prod-specific configuration (in JSON). Turns
out the problem was a trailing comma in an array of values. The IntelliJ IDEA
JSON syntax highlighter did not flag it, due to some obscure setting.

I've also had problems with some JSON parsers allowing NaN and Infinity as
double values and others failing on them.

~~~
microcolonel
For future reference, if your JSON parser accepts NaN, Infinity, or trailing
commas, it is broken.

I suspect IntelliJ is using their JavaScript grammar for JSON.

------
rpmisms
For a moment, I was worried that someone had found the JSON parser I wrote in
Python years ago. My career would be over.

------
ascotan
Regardless of the quality of this post, the author does highlight a security
issue here. The RFC states that it's up to the parser to define the limits and
different parsers may behave differently in the face of malicious inputs.
That's why you should probably test them...

------
MadWombat
Nice effort, but it seems to miss things. RapidJSON for C++ is a very popular
library. And Go has both the standard library JSON parser and a couple of
other popular implementations. I would gladly add implementations for Go, but
it is unclear how to do that.

~~~
shadowgovt
Since the document is on GitHub, it's perhaps a safe assumption the author may
be amenable to pull requests (and if they're not, you can always fork their
repo and add additional information; they MIT licensed the whole thing).

~~~
lovasoa
Indeed, I'd love to get pull requests. It's mentioned in the "Remarks" at the
end.

------
bjoli
Guile Scheme expands the call stack until memory is exhausted and then raises
an exception. That is pretty damn comfy, because it means that building lists
with recursion is just as fast as using tail recursion and a reverse! at the
end.

------
bchociej
Interesting results, e.g. on Ruby. I did give the JavaScript test file a whirl
and was able to run it just fine with a nesting level of 1,000,000, so I
wonder if something else is happening there.

------
kbumsik
JSON nested more than 100 levels deep??

Why call a JSON parser "bad" when the JSON object has already gone far beyond
"bad"?

PLEASE don't call anything bad just because it doesn't fit your own standard.

------
justinpombrio
Much more detailed comparison of JSON parsers:
[http://seriot.ch/parsing_json.php](http://seriot.ch/parsing_json.php)

------
cobbzilla
For the Java test, I’d love to see how Jackson compares to Gson.

~~~
lovasoa
Someone just contributed a pull request with results for Jackson in Java.

~~~
cobbzilla
And it’s already been merged (nice turnaround time!). It narrowly beats Gson.

------
tankfeeder
PicoLisp successfully parsed 20M nested brackets, which is the maximum I can
do on an 8 GB laptop.

------
Joyfield
"Ubuntu Linux 4.10.0-35-generic SMP x86_64 with 8Gb RAM" Gb != GB.

------
msoad
If you allow a JSON parser to use all of memory to go deep, you know what
attackers will do... As other commenters mentioned, the JSON spec allows
setting a limit on the depth.

~~~
zenhack
It isn't a matter of memory usage; the memory usage is still linear in the
size of the input regardless of the depth. If an attacker can get you to oom
by feeding you a deeply nested data structure, they can do the same with a
shallow one just as easily:

"[0, 0, 0, 0, 0, <...gigabytes of zeros>, 0 0, 0, 0]"

So you need to limit the overall input size regardless of whether you already
have a depth limit in place.
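
In practice that just means checking the payload size before (or while) parsing; a trivial sketch (MAX_BODY_BYTES is an arbitrary illustrative limit):

    
    
      import json
      
      MAX_BODY_BYTES = 1 << 20  # 1 MiB, arbitrary
      
      def parse_body(raw):
          # Enforce an overall size cap up front, independently of any depth limit.
          if len(raw) > MAX_BODY_BYTES:
              raise ValueError("payload too large (%d bytes)" % len(raw))
          return json.loads(raw)
    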

------
jcmontx
Has anyone run this for Newtonsoft.Json?

------
Roonerelli
There's a .NET parser in the source code but not in the results.

~~~
lovasoa
Would you be interested in making a PR adding the .NET project to Travis and
the results to the README?

------
vbezhenar
Java Jackson is bad too.

