
Comments in JSON - p3drosola
http://fadefade.com/json-comments.html
======
LinaLauneBaer
There is an interview with the inventor of JSON somewhere. In that interview he
explained why he did not allow comments in JSON like in XML. He said - if I
remember correctly - that leaving comments out of JSON was intentional. The
reason was that comments could be misused to add additional information for a
parser. For example, in XML you could use comments, and a special parser could
use those comments to generate code while parsing. He did not want that. He
wanted every JSON parser to be a JSON parser and nothing more. If you wanted
comments in JSON, he said, you could simply make them inline and have a
convention for the keys that act as comments - for example, every key ending
with _comment could have a value which is then treated as a comment by the
application but not by the parser.

~~~
wissler
"I removed comments from JSON because I saw people were using them to hold
parsing directives, a practice which would have destroyed interoperability."
-- Crockford

This is horrific design reasoning. It's an authoritarian, presumptuous,
"punish everyone in the classroom because one child misbehaves" mentality.

Comments would be useful in JSON _because comments are useful in code, and
JSON is code_. For example, I might have a config file that I'm typing in that
I want to leave a documentation trail for.

Don't tell me I can do a silly thing like redefine a field, as if it's "neat".
It's an abomination that I have to resort to such things. And guess what: by
resorting to such things I can still do precisely what Crockford claims he was
trying to prevent. So his rationale is not only insulting to one's
intelligence, it's sheer stupidity.

~~~
nonchalance
> and JSON is code

JSON is data. It appears to be JS code, but JSON is data. Data is not code
([http://www.c2.com/cgi-bin/wiki?DataAndCodeAreNotTheSameThing](http://www.c2.com/cgi-bin/wiki?DataAndCodeAreNotTheSameThing)).
That's why the idea of data holding parsing directives is silly. If you want
to do that, then embed it in the data (hold a MsgType key in the data
records). There's no need for comments unless you are trying to use it for
something other than raw data.
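
One way to read the "embed it in the data" suggestion, sketched in Python (the `MsgType` value and handler table are invented for illustration):

```python
import json

# The discriminator travels *in* the data, so plain parsers stay plain;
# interpreting it is the application's job, after parsing.
record = json.loads('{"MsgType": "greeting", "text": "hi"}')

handlers = {
    "greeting": lambda r: "Hello back: " + r["text"],
}
print(handlers[record["MsgType"]](record))  # Hello back: hi
```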

~~~
wissler
Nonsense. This is just more arrogance.

JSON is code because I use it as code. It's not your business to tell me it's
not code -- _you haven't seen how I'm using it_. And don't go chirping that I
should only do things your way; it's none of your goddamned business what I'm
using it for.

Further, if JSON were really only data, it would be an incredibly stupid way
to store data, given that it has a human-readable syntax that the computer can
only deal with after it's been parsed. As data, it's bloated and inefficient.
To the extent that JSON is a good format, it's code. To the extent that it's
data, it's not a good format.

~~~
defen
> JSON is code because I use it as code

You can't use JSON to compute things, therefore it is not code (unless you are
willing to concede that any document format is code).

~~~
TeMPOraL
> _unless you are willing to concede that any document format is code_

Because it is. The data vs. code distinction is arbitrary. The following
sequence of characters:

"echo 'foobar';"

can be interpreted as describing a string, a series of tokens, a piece of
code, a piece of music, or a small icon - whatever interpretation you choose.

~~~
defen
Yes, I understand that "code is data". This does not mean that data, in
general, is code; unless you are willing to make the words completely
meaningless. "Code" requires some notion of an execution platform/environment,
which does not exist for arbitrary data. Here is a string: "the quick brown
fox jumps over the lazy dog". Or how about "\u0000\u0000". That is not code,
as generally understood.

~~~
TeMPOraL
> _" Code" requires some notion of an execution platform/environment, which
> does not exist for arbitrary data._

Arbitrary data don't exist without some notion of an execution (or
interpretation) platform.

We tend to use "code" as a word for "commands telling some execution process
what to do" and "data" as a word for "information that is meant to be
transformed" but in reality this distinction is meaningless; both are
fundamentally the same thing, and even our "code" vs. "data" words have blurry
borders. It's very apparent when you start reading configuration files. For
example, aren't Ant "configuration files" essentially programs[0]?

We all know what we usually mean in context by "code" vs. "data", but one has
to remember that in fact _they are the same_ - minding this leads to insights
like metaprogramming, while forgetting it leads to dumb languages and nasty
problems, and is generally not wise.

[0] - the answer is: yes, they are, see
[http://www.defmacro.org/ramblings/lisp.html](http://www.defmacro.org/ramblings/lisp.html)
for more.

ETA:

Questions to ponder:

- are regular expressions code, or data?

- is source written in Prolog code, or data?

Also, I recommend watching
[http://www.youtube.com/watch?v=3kEfedtQVOY](http://www.youtube.com/watch?v=3kEfedtQVOY)
to learn how what would be data, as defined by the formal grammars of some
real-world protocols, can - by means of sloppy grammars and bad parser
implementations - cross the threshold of Turing-completeness and become code.

~~~
defen
I understand all this. Like many people, I've written programs in C++
templates. But I think we're talking past each other because you want to make
a pedantic point. I'm using the words as they are generally understood, not in
a technical computer science way. I'm talking about first-level stuff, not
metaprogramming. Let me give you some questions to ponder:

- Is the text of _Hamlet_ code?

- Was it code as soon as Shakespeare wrote it?

- If not, did it become code once the electronic computer was invented? Or
did that happen once a version was stored in a way accessible to an electronic
computer?

- Did all the existing paper copies immediately become code at that point as
well?

~~~
dragonwriter
> Is the text of Hamlet code?

> Was it code as soon as Shakespeare wrote it?

Yes, the text of a play is code meant to be executed by humans.

~~~
defen
That is pretty funny, but not what I was going for :)

------
jmcdonald-ut
I'm sure there are counterpoints to what I'm about to bring up, but three
observations:

1. In my experience, JSON is frequently output programmatically and consumed
programmatically. Comments are not useful in these cases.

2. The only time comments could be perceived as useful, then, would be when
reading JSON by eye. However, it is not difficult to read JSON and understand
it unless the keys have obfuscated names, and if key naming is obfuscated,
comments aren't really the correct solution.

3. "An object is an unordered set of name/value pairs", as mentioned by
jasonlotito and others earlier. There is no guarantee that a JSON parser will
give you the right value if there are two of the same keys in the same scope.
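
The third point is easy to see in practice. Python's `json` module, for instance, happens to keep the last occurrence of a duplicate name, but it will show you both pairs if you ask; a sketch (the last-wins behavior is an implementation detail, not a guarantee of the spec):

```python
import json

raw = '{"value": "the comment", "value": 42}'

# CPython's json module happens to keep the last occurrence...
print(json.loads(raw))  # {'value': 42}

# ...but the parser really did see both pairs, as object_pairs_hook reveals:
pairs = json.loads(raw, object_pairs_hook=lambda p: p)
print(pairs)  # [('value', 'the comment'), ('value', 42)]
```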

~~~
k3n
Re: #1

I know there is a lot of JSON handling that happens behind-the-scenes, but
there is also a non-trivial amount of JSON that I have manually created and/or
altered, and have to share with a team.

It's a blessing and a curse, these modern NodeJS projects -- it's awesome that
I can simply create/modify a .json file with a few properties, run a command,
and magic happens. However, if I want to try and communicate out the intent of
the values to my team of 20+, it becomes really convoluted. The projects all
magically work by looking for foo.json, but if I comment that file then it
breaks.

So I have to create another foo.comments.json, add another script that will
remove the comments and then call the original instructions. Then I need to
create additional documentation instructing the team to ignore the developer's
docs regarding native use, and to run the application with our own homebrew
setup.
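
The "strip comments, then parse" step in that homebrew setup doesn't need much code, but a naive search-and-delete would eat `//` inside string values such as URLs. A hedged sketch of a string-aware stripper (the function and sample input are illustrative, not k3n's actual script):

```python
import json

def strip_line_comments(text):
    """Remove // line comments outside of string literals.

    A deliberately small state machine: it tracks whether we are inside a
    JSON string (honoring backslash escapes) so "http://foo.bar" survives.
    """
    out = []
    in_string = False
    escaped = False
    i = 0
    while i < len(text):
        ch = text[i]
        if in_string:
            out.append(ch)
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        else:
            if ch == '"':
                in_string = True
                out.append(ch)
            elif ch == "/" and i + 1 < len(text) and text[i + 1] == "/":
                while i < len(text) and text[i] != "\n":  # skip to end of line
                    i += 1
                continue
            else:
                out.append(ch)
        i += 1
    return "".join(out)

commented = '{\n  // dev server only\n  "url": "http://foo.bar"\n}'
print(json.loads(strip_line_comments(commented)))  # {'url': 'http://foo.bar'}
```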

It also can make testing a pain in the ass, because now I can no longer
comment out values, I have to remove them completely. Not a huge deal,
annoying nonetheless.

~~~
chmike
Why not add an object field with an identifier like a_comment: "blabla..."?

The advantage I see in this way of commenting is that the comment becomes
accessible inside the program instead of being stripped out by the parser. For
the human reader it's also more obvious.

Unfortunately, it's not possible to add a comment to anything other than
objects. But the same is true of the OP's proposal.

~~~
k3n
Why have comments in code at all, then? You could always just make a
variable/constant, with the added benefit that the comment becomes accessible
inside the program...

But that makes no sense at all to me. I agree that using comments as
metadata/directives is typically an antipattern hack, but what about for non-
metadata comments? Embedding comments into code is just as ass-backwards as
embedding code into comments. Neither is right.

> For the human reader it's also more obvious.

Strongly disagree here -- if I open a file that I've never worked in before, I
have faith that the comments were meant specifically for me. Likewise, I
assume all code in the file is _not_ for me (on account that I'm not a
compiler/interpreter/etc.).

------
CanSpice
Given that the RFC says "The names within an object SHOULD be unique", there's
nothing stopping me from writing a parser that takes the first name/value pair
and throws all the others on the floor. Or, even better, one that picks a
random name/value pair when the same name appears. Both of these behaviours
are allowed by the RFC, and both would break this hack.

Putting comments into JSON in this way is a hack and shouldn't be used by
anybody who has any interest in writing maintainable software. Relying on
ambiguities in an RFC and someone saying "JSON parsers work the same way" is a
good way to end up with a really obscure bug in the future.

~~~
serichsen
At least in ECMA-262 5, Ch. 15.12.2, there is a NOTE: "In the case where there
are duplicate name Strings within an object, lexically preceding values for
the same key shall be overwritten."

It still does not feel right.

------
adamtj
This is misguided. You don't need comments in a JSON config file. Why? Because
you don't use JSON for config files that need comments.

JSON is like duc(k|t) tape. It's really easy to stick two things together with
it. That doesn't mean you always should. It's the simple thing that gets the
job done so you can focus on what matters.

One shouldn't pick JSON for config files and then hold it up as good design.
"Look at me, I'm daring and _not using XML_!" Using JSON is crap design, but
good engineering means sometimes picking something crappy and not wasting
effort on things that don't matter in the end.

If your configuration files become both complicated and important enough that
you need comments, then you should stop using JSON. If your duck tape job
starts needing additional reinforcement, then you should probably just get rid
of the duct tape and do it right.

If one of your requirements is a sufficiently trendy yet commentable config
language, look into YAML. Also, gaffer tape. The white kind is easier to write
on.

~~~
glhaynes
If crap design like JSON is the right engineering choice sometimes (and I
agree that it is), that seems like an argument that adding comments in this
crappy way may sometimes be the right engineering choice.

~~~
IanCal
Relying on undefined behaviour in a parser for comments is something I find
quite hard to define as "the right engineering choice" in any situation.

------
nonchalance
The JSON RFC
([http://www.ietf.org/rfc/rfc4627.txt?number=4627](http://www.ietf.org/rfc/rfc4627.txt?number=4627))
says

    
    
        The names within an object SHOULD be unique.
    

SHOULD is defined
([http://www.ietf.org/rfc/rfc2119](http://www.ietf.org/rfc/rfc2119)) as

    
    
        3. SHOULD   This word, or the adjective "RECOMMENDED", mean that there
           may exist valid reasons in particular circumstances to ignore a
           particular item, but the full implications must be understood and
           carefully weighed before choosing a different course.
    

Salient point is that you would need to ensure that you are only using JSON
parsers that tolerate duplicate names (and use the last value)
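
Rather than auditing every parser's tolerance, some parsers let you pin the behavior down yourself. A sketch using Python's `json` module, which can be made to reject duplicates outright (the hook below is illustrative, not part of any spec):

```python
import json

def reject_duplicates(pairs):
    """object_pairs_hook that refuses duplicate names instead of guessing."""
    obj = {}
    for key, value in pairs:
        if key in obj:
            raise ValueError("duplicate key: %r" % key)
        obj[key] = value
    return obj

print(json.loads('{"a": 1, "b": 2}', object_pairs_hook=reject_duplicates))

try:
    json.loads('{"a": 1, "a": 2}', object_pairs_hook=reject_duplicates)
except ValueError as e:
    print(e)  # duplicate key: 'a'
```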

~~~
IanCal
> Salient point is that you would need to ensure that you are only using JSON
> parsers that tolerate duplicate names (and use the last value)

To drive this home a bit more forcefully, it requires knowing the behaviour of
your parser where it is marked as "undefined" in the spec.

If that isn't enough to stop you, DON'T USE JSON. A patch level change in a
library could break your code in a non-obvious way _and it would be your
fault_. If you want comments, DON'T USE JSON, JSON DOESN'T HAVE THEM.

~~~
bzbarsky
Note that if your parser is the ES-standard JSON.parse, then the behavior here
is in fact defined by ES5 section 15.12.2, even with duplicate names.

------
NathanKP
This hack, while nice, is still just a work around. I highly recommend that if
you can, in as many places as possible use YAML instead of JSON.

JSON works great for on-the-fly communication with frontends running
JavaScript, or for communication between JavaScript processes like Node.js
servers. But for configuration files and other things that need comments, YAML
is many times better, both for its clean, Markdown-reminiscent structure and
its native comment support.

Node.js has a great module called js-yaml
([https://github.com/nodeca/js-yaml](https://github.com/nodeca/js-yaml))
which automatically registers handlers for .yml and .yaml files, allowing you
to require them in your Node.js code just like you can with JSON files.

It also comes with a YAML parser for the browser side of things, so if you
wanted you could even send YAML directly from the server to the client,
although frankly I don't see much advantage to sending YAML over the wire
instead of JSON. (And as others have mentioned below, untrusted YAML sources
could insert malicious objects, so I wouldn't recommend this technique.)

You can even use YAML for your package.json in a Node program:
([https://npmjs.org/package/npm-yaml](https://npmjs.org/package/npm-yaml))

~~~
wmil
YAML is neat, but library developers have a history of writing unsafe YAML
parsers.

There's the famous Rails vulnerability due to YAML. Python needed to add
'yaml.safe_load'.

YAML is a little too rich. It's always one poorly thought out convenience
feature away from disaster.

~~~
ygra
And JSON was often “parsed” with eval().

~~~
krapp
That's not really a problem with JSON though, is it? Anything you run through
eval() is a disaster in the making. Maybe the problem is that people are
trying to make data formats too powerful, and too many things are creeping
toward Turing completeness that don't need to be.

I think parsers for JSON, YAML, INI, etc. should be designed in such a way as
to make it impossible to assign anything like an object, class, or function.
Numbers, strings, and collections of numbers and strings... that's all you
should get (though obviously "string" is fraught with peril). Anything more is
unnecessarily complex.

~~~
dehora
It is a problem with JSON in the sense that it's a JavaScript subset, 'in
practice' - modulo the Unicode support that goes beyond JavaScript. So it's
to be expected that eval() will be used as a convenience by developers,
ignoring the security implications that come with eval() hoisting full
JavaScript.

The way to have avoided the issue would have been for JSON to have a grammar
that broke eval(). But one could argue that the ability to pass JSON into
eval() to get JavaScript is one of the reasons JSON became popular in the
first place.

------
hosay123
This would completely break any event driven (streaming) parser.

~~~
the_gipsy
Or a parser that simply discards existing keys.

~~~
IanCal
Which, importantly, would be perfectly fine according to the spec (as I
understand it).

~~~
masklinn
Indeed, the spec states that keys SHOULD be unique (with RFC 2119 meaning) and
leaves behavior unspecified in case of duplicate key.

~~~
IanCal
My favourite example of dealing with undefined behaviour is this:

In practice, many C implementations recognize, for example, #pragma once as a
rough equivalent of #include guards — but GCC 1.17, upon finding a #pragma
directive, would instead attempt to launch commonly distributed Unix games
such as NetHack and Rogue, or start Emacs running a simulation of the Towers
of Hanoi.[7]

Source:
[http://en.wikipedia.org/wiki/Undefined_behavior](http://en.wikipedia.org/wiki/Undefined_behavior)

------
jasonlotito
My first thought in seeing this was that objects aren't guaranteed to maintain
order: "An object is an unordered set of name/value pairs" -
[http://www.json.org](http://www.json.org)

~~~
jfoutz
There is an intrinsic order in the text, though. It's up to the parser to keep
clobbering a value every time a new value comes in for a given key.

This seems like a bad idea. It seems heavily reliant on edge-case behavior.
But hey, might work well for the original author.

~~~
IanCal
> it's up to the parser to keep clobbering a value every time a new value
> comes in for a given k

Nope, parsers are perfectly within their rights to do whatever they want with
duplicate keys. They could read them backwards, sort them, whatever. The
behaviour in the presence of duplicate keys is undefined.

> This seems like a bad idea.

It is an astonishingly bad idea. I'm concerned by it being so high on the
page.

> But hey, might work well for the original author.

Depends on their parser. It's undefined behaviour according to the spec. It
might work now, but I'd argue it doesn't work well, as a patch level change
could bork this.

~~~
jfoutz
I'm not so sure. I think JSON falls back to the ECMAScript standard for
specific details. The object-initializer semantics seem to force a left-to-right
evaluation order (around page 65 of the ECMA spec). I'll admit my claim
was unfounded when I made it, and I only went to the spec to avoid being wrong
:) If I were to implement a JSON parser, I would now feel obligated to
evaluate in order, due to my reading of the spec.

However, I think we wholeheartedly agree: don't rely on this behavior. It is
an outright strict-mode error.

------
JulianMorrison
This definitely qualifies for a Zen style thwack over the head with a stick
and a reprimand of "stop being clever!"

------
varikin
This sounds great until some parser uses the comment definition instead of the
value. Is it defined in the spec that parsers need to use the last defined
value for a key?

~~~
dak1
Since the order of an object's keys is not guaranteed, it seems like even if a
parser respected the last-defined rule, you could still potentially end up
with the wrong field last.

------
avolcano
Can we all just agree, as a community, to add comment support to our JSON
parsers? Hell, I'd do a PR on V8 if I knew C++.

It's ridiculous that I can't document notes on dependencies in my NPM
package.json, or add a little reminder to my Sublime Text configuration as to
why I set some value, because we're using JSON parsers that can't handle the
concept of ignoring a line with a couple slashes prefixing it.

IMO - either we add comments to JSON, or we stop using it for hand-edited
configuration.

~~~
phpnode
Crockford's rationale for not supporting comments is that people use them to
add meta data to the object (e.g. type annotations) which makes it hard to
consume with different parsers.

~~~
avolcano
Trusting the community to do the right thing is better than handicapping your
users.

Regardless, of course, people add metadata to JSON already - there's zero
reason you can't "_type": "int". It's a completely arbitrary reason.

~~~
phpnode
right - but that is valid syntax! any json parser can understand that, and
that's what he recommends doing instead. But if you're doing this in comments,
you end up writing your own mini language to describe your annotations, and
nothing else knows how to parse it. that should clearly be avoided.

~~~
esailija
If JSON had comments, then of course any JSON parser could understand those
comments just as well as they can currently understand "_type": "int". What am
I missing?

~~~
masklinn
That because they're comments, specific JSON parsers could (and likely would)
interpret processing instructions embedded in those comments to toggle
behaviors on the fly. Crockford's fear (well-founded, I think) was that
comments would be used to "extend" JSON.

~~~
esailija
Yes I get that. What I don't get is how "_processing_instruction": "whatever"
is any different.

~~~
phpnode
when you do

    
    
        {
            "_type": "int",
            "foo": "123"
        }
    

a JSON parser will always know how to represent that object. How you process
that object is up to you.

However, when you write:

    
    
        {
            // @type int this is a comment
            "foo": "123"
        }
    

and you call JSON.parse(), what would you expect to get back? You can no
longer represent it as a simple object; you need some way to access the
comment. How do you do that? Moreover, whose responsibility is it to process
the annotation in that comment? The parser's? Should you get back an integer
rather than a string for obj.foo? How would you support different kinds of
annotation? What happens if you're using parser A and your client uses parser
B? Does parser B support all the annotations that parser A supports? If you
need to modify a JSON structure - decoding, adding a property, and re-encoding
\- should the comments be preserved? ...

You can see that having comments introduces a whole host of other questions
and ambiguity, and would only make it harder for different platforms to share
data. Avoiding this kind of cruft is why JSON is winning vs. XML for most
things these days.
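
The contrast can be made concrete: with the annotation stored as an ordinary pair, any compliant parser round-trips the document, and interpreting the annotation is a separate, explicit application step. A sketch (the `_type` convention and coercion table here are illustrative, not standard):

```python
import json

# Hypothetical convention: a "_type" key tells the *application* how to
# interpret sibling values. The parser itself never special-cases it.
COERCIONS = {"int": int, "float": float}

def apply_annotations(obj):
    kind = obj.pop("_type", None)
    if kind in COERCIONS:
        return {k: COERCIONS[kind](v) for k, v in obj.items()}
    return obj

doc = json.loads('{"_type": "int", "foo": "123"}')
print(apply_annotations(doc))  # {'foo': 123}
```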

~~~
esailija
In the eyes of a compliant parser (assuming JSON supported comments), a
comment is just a comment, like "_type": "int" is just a key-value pair.

However, when using an ad hoc parser, all bets are off about the result in
both cases, not just the comment case. Regardless of comment support in JSON,
the same problem appears to exist.

------
julius
Funny story. JSLint[1] does not approve of this technique. I asked Crockford
to implement the duplicate check in April 2009 via email. 20 minutes later,
out of nowhere, he was done implementing that check and wrote back "Please try
it now."

This guy is fast. Especially nice considering we do not know each other at
all.

[1] [http://www.jslint.com/](http://www.jslint.com/) - JS checking tool from
the inventor of JSON

~~~
WayneDB
I sent him an email once asking for the same JSLint license that he gave to
IBM (you know, the one without the "do not use this for evil" clause.)

He responded that he was getting annoyed by everybody asking for this, so it
was going to cost me $100K to obtain such a license.

I responded that I only asked for that license in order to annoy him (and
thanks for the confirmation that it worked), because his immature license
clause is annoying everybody else.

------
kalleboo
Note that these comments would disappear the second you use a JSON-aware tool
to manipulate one of these files.

~~~
mtkd
You hope it is the comment dupe that disappears and not the field you want.

------
kstenerud
Instead of using tricks that rely on parser implementation behaviors, why not
just put an actual comment field in the object?

    
    
        {
            "myvalue_comment": "This is a comment",
            "myvalue": 42
        }

~~~
MatthewPhillips
That example is fine, but you wouldn't want a long comment getting loaded into
memory because the parser doesn't know any better.

~~~
dnautics
for that matter, just do:

    
    
        {
          "comment": "this is a comment",
          "value": 45,
    
          "comment": "this is also a comment",
          "value2": 64,
    
          "comment": "we like overloading the comment field",
          "stringval": "but these stay the same"
        }
    

~~~
IanCal
Then the parser might fail, and rightly so. A comment lower down shows it
failing in a simple parser in go:
[https://news.ycombinator.com/item?id=6147478](https://news.ycombinator.com/item?id=6147478)

Keys SHOULD be unique.

~~~
dnautics
SHOULD != SHALL

And, no, this scheme doesn't break the Go parser, because there is no type
shift between the "comment" fields; they are all strings.

[http://play.golang.org/p/bxcIIyAeph](http://play.golang.org/p/bxcIIyAeph)

~~~
IanCal
Ah fair enough on the break, I was wrong there.

But if the spec says that keys SHOULD be unique, what's the behaviour when
they aren't?

~~~
dnautics
I would agree it's very hackey and probably not a good idea since the spec is
liable to change. But I wouldn't be sad if the spec were changed to allow for
this, or to allow for comments.

~~~
IanCal
> I would agree it's very hackey and probably not a good idea since the spec
> is liable to change

Well it's not really about the spec changing, the spec doesn't have a defined
behaviour for duplicate keys.

> But I wouldn't be sad if the spec were changed to allow for this, or to
> allow for comments.

I don't think duplicate keys should be allowed, but I've no strong feelings on
comments. I don't think there's any real need for them though.

~~~
dnautics
I'm biased. Back in the BeOS -> Haiku days, I wanted some sort of
configuration text file that could neatly be parsed into a BArchive object
(and presumably transmitted in a BMessage). XML was all the rage at the time,
so I wrote myself a sort of XML-ish format, but I never contributed it to the
tree. I learned the problems with XML (should something be an attribute or an
innerText?). I wanted something with bracket notation, but JSON had not been
discovered by Crockford yet; if it had, I would have gotten more involved and
tried to have it adopted.

------
nrivadeneira
Terrible spec-violating hack aside, the idea of the author soliciting upvotes
on StackOverflow doesn't sit well with me. I'd hate for SO solutions to become
diluted by answers from users who are 'marketing' for upvotes.

------
jgeerts
It is a 'hack', as discussed in the article, and I will probably never use it.
JSON should be either self-explanatory or documented; I don't see any reason
why you would add this unnecessary clutter to these messages.

It is already hard to read as is, and this makes it worse and more confusing.
If some big service started using this, you would have to know about the
'hack'; otherwise you would have to look up what the hell is going on.

Also, this is the same information for each call and thus redundant; it makes
your messages larger, when an advantage of JSON is that messages are generally
small.

------
JOnAgain
This, to me, looks like an example of relying on a nondeterministic
implementation. To my knowledge, the standard doesn't prescribe that parsers
take the second/last of a duplicate key. As a result, this relies on
implementation-specific choices, which can lead to a terrible upgrade process.

Switch to a different JSON parser; does it still work? Probably. But I
wouldn't bet that much.

If I were implementing a JSON parser, might I throw an error on a duplicate
key? Maybe. Maybe I would just print a warning?

If I were ever going to give someone advice, it would be to never do this.

------
asnyder
You should use standard JS comments and process them out. Douglas Crockford's
official answer on comments:
[https://plus.google.com/118095276221607585885/posts/RK8qyGVa...](https://plus.google.com/118095276221607585885/posts/RK8qyGVaGSr).
Essentially, just process them out beforehand with something like JSMin;
pretty straightforward.

------
sktrdie
This is a horrible hack. You should use JSON-LD [1] to describe the fields of
your JSON. It's a W3C standard!

Also, it's _not_ defined in the JSON standard in which order an implementation
needs to parse the JSON fields/keys. So you could end up with potentially
wrong results!

1. [http://json-ld.org/](http://json-ld.org/)

------
basicallydan
This is a nice trick, but it should probably only be used in systems where the
set of people touching the code is limited and rarely changing, and where
anything using the JSON is strictly going to treat the last defined value as
the value to use. Dragons lurk elsewhere!

------
peterkelly
> Believe it or not, it turns out JSON parsers work the same way

 _Please_ don't do this. There are almost certainly parsers out there right
now that don't work like this, and if not, there likely will be one day.

------
rcarmo
I do something else that is a lot more readable:

    
    
        { 
          "#": "this is a comment for the next line",
          "url": "http://foo.bar"
        }
    

Simple.

~~~
IanCal
Hopefully you don't use the same key multiple times, as that's not guaranteed
to work in different parsers.

------
zemo
If I ever saw this in a project, I would remove those comments in a heartbeat.
The behavior here is specific to the JSON parser; JavaScript is not the
entirety of programming.

It does break the JSON parser in the Go standard library, in a totally
non-obvious way:
[http://play.golang.org/p/BsDd47vWna](http://play.golang.org/p/BsDd47vWna)

I would be surprised if it doesn't break many parsers, especially JSON parsers
in static languages. If you want that sort of behavior, don't use JSON.

------
znmeb
This is a celebration of programmers' ability to generate unmaintainable code
by exploiting implementation dependencies. People get _fired_ for pulling this
horseshit every day!

------
M4rkH
A common practice in config files is to comment out whole sections, e.g.
optional proxy-server settings. This sort of multi-line comment is not
addressed by this hack.

------
kgabis
Well, here we go:
[https://github.com/kgabis/parson/issues/7](https://github.com/kgabis/parson/issues/7)

------
wickedlogic
Don't use them; there is no such thing. Make your comments first-class
citizens in the data.

------
lttlrck
Nice hack but fails JSHint.

[1] [http://jshint.com/](http://jshint.com/)

------
opminion
JSON has comments already. It just requires you to decide what the comment
marker is.

------
quantumpotato_
I thought JSON is mainly for machine to machine consumption.. who reads
comments?

------
knodi
This is a recipe for disaster.

------
davidradcliffe
Neat trick! Not sure I'd trust it, and might be confusing for anyone reading
who didn't know this.

------
8ig8
That seems pretty fragile.

