
JSON Alternative – Internet Object - dsego
https://internetobject.org/
======
snorremd
Internet Object, from my limited read-through, seems to be order sensitive in
a way that JSON is not. If you define a schema at the top with the available
properties in some order then the values must follow that same order.

And what happens with sparse data? In JSON you simply omit the key/value
pairs from the object altogether when there is no data for a given property.
For Internet Object you presumably need to insert a "null" placeholder in the
column you have no data for?

I can see how omitting the "header" names from each "object" in a vector/array
of objects can save space, but the trade-offs are not mentioned. And if you
gzip a JSON body, the compression algorithm will get rid of those repeated
bytes anyway, at the expense of some additional CPU resources. Running the
example data through gzip proves this:

Original sizes:

- io: 51K

- json: 88K

gzip sizes:

- io: 20K

- json: 21K
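That gzip experiment is easy to reproduce with the Python standard library. A minimal sketch, using a made-up payload of repetitive objects rather than the benchmark's actual data:

```python
import gzip
import json

# Illustrative stand-in for the benchmark data: a uniform list of
# objects whose keys repeat on every record (names/values are made up).
records = [{"name": f"user{i}", "age": 20 + i % 50, "active": True}
           for i in range(1000)]

raw = json.dumps(records).encode("utf-8")
compressed = gzip.compress(raw)

# gzip's dictionary coding absorbs the repeated key strings,
# so the compressed size is far smaller than the raw JSON.
print(f"json: {len(raw)} bytes, gzipped: {len(compressed)} bytes")
```

The repeated `"name"`, `"age"`, and `"active"` keys are exactly the kind of redundancy DEFLATE removes, which is why the gzipped io and json sizes end up so close.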

The more interesting tidbit, I think, is the embedded schema for validating
data. The splitting of "headers" and data seems less important. It would also
be nice to see speed comparisons of serializing and deserializing Internet
Object and JSON data, though a fair comparison would require using the same
programming language and writing an optimized Internet Object serializer.
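A harness for such a comparison could look like the sketch below. It times only the stdlib `json` module, since no comparable Internet Object serializer is assumed to exist; an IO implementation would be dropped into the same two `timeit` calls. The payload is invented for illustration.

```python
import json
import timeit

# Made-up payload; any uniform list of objects would do.
payload = [{"name": f"user{i}", "age": 20 + i, "city": "Oslo"}
           for i in range(1000)]
text = json.dumps(payload)

# Time 100 round trips each. An Internet Object serializer would be
# benchmarked with identical calls for a like-for-like comparison.
ser = timeit.timeit(lambda: json.dumps(payload), number=100)
de = timeit.timeit(lambda: json.loads(text), number=100)

print(f"serialize: {ser:.3f}s  deserialize: {de:.3f}s (100 runs)")
```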

This comment might come off as overly negative or disparaging. Kudos to the
author for coming up with a new data format to solve an observed problem.

~~~
jbergens
And if it uses a specified order and requires a schema, we might as well use
ProtoBuf, which is probably faster.

------
rumanator
The only added value I see is the schema, but is the schema relevant at all?
I mean, JSON documents are parsed by clients, and the robustness principle
implies that the client is free to follow any schema it sees fit.

[https://en.wikipedia.org/wiki/Robustness_principle](https://en.wikipedia.org/wiki/Robustness_principle)

Other than this, other document formats such as TOML haven't seen much action
in spite of being arguably simpler and requiring less data than JSON.

~~~
Mikhail_Edoshin
JSON is also sent back by clients to the server, and the principle there is
"don't trust the client", so any way to mechanically validate at least the
syntax would be helpful.
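Syntax-level validation of untrusted input is already mechanical with any JSON parser; a minimal server-side sketch (the helper name is invented):

```python
import json

def parse_untrusted(body: bytes):
    """Reject syntactically invalid JSON from a client instead of
    trusting it. Illustrative helper, not from the article."""
    try:
        return json.loads(body)
    except (json.JSONDecodeError, UnicodeDecodeError):
        return None

# Valid JSON parses; bare (unquoted) keys are a syntax error.
assert parse_untrusted(b'{"age": 25}') == {"age": 25}
assert parse_untrusted(b'{age: 25}') is None
```

What the parser cannot check is structure (types, required fields, ranges); that is where a schema language earns its keep.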

------
fiedzia
> name, age:{int, min:20}, address: {street, city, state}

Why does "int" mean a type while "street" does not?

> active?:bool

it seems that this is legal:

active?: bool, someattr?: bool T

now which one was set?

This might be a very useful replacement for CSV, but not for JSON.

------
ktpsns
> JSON not only mixes key/values and lacks schema; it mixes data and headers
> (or metadata for that matter) too.

There is no concept of headers in JSON, and neither is there in the proposed
Internet Object. The (ugly?) example at
[https://internetobject.org/the-story/](https://internetobject.org/the-story/)
can be written in any data format (thinking of YAML), and obviously also in
Internet Object.

There are things to blame JSON for, such as the lack of comments. But the
claimed lack of schema is plain wrong -- there is JSON Schema, and there are
validators that can perfectly well be used when receiving data over the web,
for instance.
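For example, the article's `age: {int, min: 20}` constraint maps directly onto JSON Schema, validated here with the third-party `jsonschema` package (an assumption; any conformant validator works the same way):

```python
# Requires: pip install jsonschema
from jsonschema import validate, ValidationError

# Rough JSON Schema equivalent of the article's "age: {int, min: 20}".
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 20},
    },
    "required": ["name", "age"],
}

validate({"name": "Ann", "age": 27}, schema)  # passes silently

try:
    validate({"name": "Bob", "age": 12}, schema)
except ValidationError as e:
    print("rejected:", e.message)  # the minimum constraint is violated
```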

------
alexchamberlain
The innovation here appears to be a text based format that consolidates the
keys used across a list of objects with a common structure; that seems pretty
cool!
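The consolidation idea itself is simple enough to sketch: factor the shared keys of a uniform object list out into one "header", Internet Object-style (function and field names here are invented for illustration):

```python
import json

def consolidate(objects):
    """Split a uniform list of objects into one shared key 'header'
    plus positional value rows. Assumes every object has the same keys
    in the same order -- this is where the order sensitivity comes from."""
    header = list(objects[0])
    rows = [[obj[k] for k in header] for obj in objects]
    return {"header": header, "rows": rows}

people = [{"name": "Ann", "age": 27}, {"name": "Bob", "age": 31}]
compact = consolidate(people)
print(json.dumps(compact))
# {"header": ["name", "age"], "rows": [["Ann", 27], ["Bob", 31]]}
```

The repeated keys are gone from the wire format, at the cost of positional (order-sensitive) rows and a null placeholder wherever a field is absent.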

------
ironfootnz
Just typed JSON, and with that faster serialization? Not to mention, where is
the compression comparison?

~~~
scottmf
With gzip they're almost the same size

[https://github.com/maniartech/InternetObject-vs-JSON-benchma...](https://github.com/maniartech/InternetObject-vs-JSON-benchmark/tree/master/data)

IO: Original size: 51348 bytes / Compressed size: 20021 bytes

JSON: Original size: 89873 bytes / Compressed size: 22529 bytes

------
tjpnz
Friendly reminder - if you don't serve browsers, you shouldn't be using JSON
or text-based formats like the one being proposed here.

~~~
ktpsns
Are you aware that text-based formats dominate configuration in Unix-like
operating systems (such as Linux)? In particular, the ecosystem of an average
GNU/Linux distribution is famous for its vast number of different languages,
i.e. "every daemon has its own configuration language". Standards such as
YAML, XML, and JSON help a lot to make that easier.

Also, these file formats are useful for any kind of application interface,
far beyond the world wide web -- for example, serialization between different
programs, over a network or not. This is something where JSON excels.

