

JSON Support in SQL Server 2016 - us0r
http://blogs.msdn.com/b/jocapc/archive/2015/05/16/json-support-in-sql-server-2016.aspx

======
biot
If I'm reading this correctly, the unfortunate part about using nvarchar
rather than a native JSON data type is that it's entirely possible to store
malformed JSON into the database, thus putting the responsibility for
validation on your business logic. When querying, you need to exclude
malformed JSON by using the ISJSON() operator.

I get the argument that nvarchar makes it work with all tooling, but one could
make the argument that if your tools don't support JSON already then perhaps
they need to get with the times.

Imagine if SQL Server didn't have any date types and you had to query an
nvarchar data type with ISDATETIME() and then get the year via
DATE_VALUE(t.OrderDate, '$.year'). Seems a bit too hackish.
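
For reference, the querying pattern being described looks roughly like this (a sketch based on the functions named in the post; the table and column names are invented):

```sql
-- Hypothetical Orders table storing JSON in an NVARCHAR column.
-- ISJSON() filters out malformed rows; JSON_VALUE() extracts a scalar
-- at a JSON path from the remaining ones.
SELECT JSON_VALUE(o.OrderInfo, '$.customer.name') AS CustomerName
FROM Orders o
WHERE ISJSON(o.OrderInfo) > 0;
```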

~~~
bjg
The article points out that you can use their ISJSON function as a check
constraint on your column, guarding it from malformed data:

> ISJSON( jsonText ) that checks is the NVARCHAR text input properly formatted
> according to the JSON specification. You can use this function the create
> check constraints on NVARCHAR columns that contain JSON text.
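
So the guard would look something like this (a sketch; table and constraint names are made up):

```sql
-- Reject malformed JSON at insert time with a CHECK constraint,
-- instead of filtering it out at query time.
CREATE TABLE Orders (
    OrderId   INT PRIMARY KEY,
    OrderInfo NVARCHAR(MAX)
        CONSTRAINT CK_Orders_OrderInfo_IsJson CHECK (ISJSON(OrderInfo) > 0)
);

-- This would now fail instead of silently storing garbage:
-- INSERT INTO Orders VALUES (1, N'{not valid json');
```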

~~~
__Joker
Not to nitpick, but you want this validation to be the default, which in this
case it can't be.

The question is what stopped MS from just wrapping this NVARCHAR with JSON
validation and JSON access and shipping it as a JSON type. Maybe a native JSON
type will be added in future versions. I think they just wanted to get the
ball rolling.

~~~
taspeotis
> Question is what stopped MS to just wrap this NVARCHAR with json validation
> and json access and give it as json type

A large part of the blog post [1] is dedicated to this issue?

> In SQL Server 2016, JSON will be represented as NVARCHAR type. There are
> three reasons for this:

[1] [http://blogs.msdn.com/b/jocapc/archive/2015/05/16/json-suppo...](http://blogs.msdn.com/b/jocapc/archive/2015/05/16/json-support-in-sql-server-2016.aspx)

------
ahachete
Surprised that they mention PostgreSQL. Surprised even more that they seem to
admit PostgreSQL is more advanced now than what they are releasing next year.

However, they incorrectly mention that PostgreSQL supports BSON. It doesn't.
It rather has a data type called "jsonb", which is a binary JSON
serialization, but has nothing to do with BSON (which happens to be MongoDB's
serialization format, and a relatively poor one, by the way).

Edit: blog post has been updated by the author to reflect that the type is now
"jsonb" rather than BSON. Thank you!

~~~
irrigation
As someone who extensively uses pgsql and SQL Server, I don't think it's fair
to say that PostgreSQL is "more advanced" because it has one particular,
somewhat trendy feature. There are many critical features that pgsql either
lacks (query parallelization), or came many years after SQL Server (index-only
scans, true materialized views, etc).

And really, JSON in the database is just the more modern version of XML in the
database, which of course SQL Server has supported for years. The problem with
XML in the database was not a fundamental one, but rather an implementation
issue: where the JSON model in most solutions is some variation of toss-some-
structured-data-in and then work with it, the XML model of SQL Server required
significant, poorly documented, confusing-to-implement configuration to do
anything meaningful with it. It really killed the feature.

Personally I still think XML is superior to JSON, but it got usurped by the
architectural astronauts who kept layering noise on it to the point of being
unusable.

~~~
karmajunkie
json's been around for over ten years and is the default serialization format
for most web frameworks. To call it trendy in the same breath as singing the
praises of XML is a bit gauche.

~~~
irrigation
JSON is used for persistence in a limited number of projects, and still lacks
fundamental things like a validation schema or even a _date_ type. Yes, it
absolutely is "trendy", and it absolutely pales compared to XML. They both
fill virtually the identical requirement, one just slowly repeating all of the
mistakes of the other, as history tends to repeat itself.

~~~
rpedela
JSON is an improvement overall even though there are certainly issues. It is
easier to read, easier to parse, and usually fewer bytes for the same data.
Validation schema can be provided by JSON Schema [1]. The lack of date type
and comments is annoying, but not that big of a deal. I think parsing and
reading are far more important to get right which XML didn't.

1\. [http://json-schema.org/](http://json-schema.org/)

~~~
WorldWideWayne
> but not that big of a deal. I think parsing and reading are far more
> important to get right which XML didn't.

...in your opinion, for your use cases. I think XML is eminently more readable
due to the fact that it has named types and also because I never have to read
the following and figure out what goes where:

    
    
        }}}}}]]}}]]}}
    

JSON is great for sending data to Javascript though and I'm pretty sure that's
the only reason anyone is using it.

~~~
rpedela
You have the same problem in XML with that amount of nesting. Then you pretty
print the XML or JSON and problem solved.

------
drawkbox
Postgres became our go-to RDBMS because of JSON support. It is really sad it
has taken Microsoft this long to get native JSON database support. The lack of
it makes SQL Server more difficult to use for .NET projects than its
competitors or NoSQL databases. When clients pick .NET we always hit a little
annoyance every time we need to use JSON in the database, which we solve with
caching/redis/mongo/file io/compression depending on the situation.

JSON support has lagged in all areas of Microsoft's development platforms.
They didn't embrace it, and most .NET developers weren't using it or didn't
even know what it was until WCF in 2006-2007, by which point it had been
heavily used elsewhere for years. Newtonsoft really pushed it in the
Microsoft/.NET world, which is awesome. Ten years later, after SOAP web
services went away in favor of REST/JSON, it is finally getting into MS SQL
Server. MS SQL Server had support for XML searching early; it's a poor showing
on JSON support, but I'm glad they are getting it done.

------
reagan83
SQL Server 2005 and newer continued to invest in the XML datatype which was a
poor product choice in hindsight.

While I applaud this product feature, I'm not convinced of the value of
storing JSON in a relational database (any more than storing XML).

~~~
baq
turns out relational databases make for quite good nosql datastores, it just
takes a little bit of thought instead of jumping onto a bandwagon of the
month.

~~~
M8
SQL =>

Step 1: remove data integrity.

Step 2: remove powerful query language.

Step 3: only store strings.

=> No-SQL

It's interesting that the technology is named after a missing feature.

~~~
dorfsmay
As someone who had to do aggregation on hstore... Gosh! Why? It's not that
difficult to add a column!

------
azinman2
Looks a lot like what Postgres originally did with its JSON support pre-9.4,
before it got significant beef-ups.

------
Olap84
But can it parse and output a proper CSV yet?

------
manigandham
What's the best use case for this? I've seen lots of requests but I'm
struggling to find a really good scenario beyond just avoiding the work of
normalizing into regular DB tables and all the schema management that comes
with it (which isn't necessarily a bad thing).

Any top-level fields can just be made columns, and we already have 1:M and M:M
mappings done well in relational tables. Is it purely being able to serialize
into one field and then query on that? Why not just use a document database
then?

------
NDizzle
Leaving out JSON indexes seems like a tremendous oversight.

~~~
viggity
if you really need indexes, extract the fields from your json and keep them as
additional columns in the table. Key value stores have always had problematic
indexing on anything that wasn't the key. RDBMS does indexing really well,
lean on that side for what it is good for.
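
In SQL Server that extraction could look something like this (a sketch with invented names, assuming the JSON_VALUE function the post describes can be used in a computed column):

```sql
-- Pull a scalar out of the JSON into a computed column,
-- then index that column with a plain B-tree index.
ALTER TABLE Orders
    ADD CustomerName AS JSON_VALUE(OrderInfo, '$.customer.name');

CREATE INDEX IX_Orders_CustomerName ON Orders (CustomerName);
```

Queries filtering on CustomerName can then seek the index instead of parsing JSON per row.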

~~~
est
> extract the fields from your json and keep them as additional columns in the
> table

You can only do that for the simplest key-value JSON, not the more complicated
shapes you see IRL.

E.g. an array

ID, book_title, book_intro, book_tags

here book_tags is a json array. Now try index that!

MongoDB can index it, full-text search can index it, pg 9.4+ can, but other
RDBMSs can't.

Supporting nested data structures in an RDBMS is hard. You have to implement
flatten/unflatten voodoo in a weird & lame, non-SQL DSL.
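
For comparison, here is roughly how the array case works in PostgreSQL 9.4+ (a sketch; table and index names are invented):

```sql
-- jsonb column holding the tags array, indexed with GIN so
-- containment queries (@>) can use the index.
CREATE TABLE books (
    id         serial PRIMARY KEY,
    book_title text,
    book_intro text,
    book_tags  jsonb
);

CREATE INDEX idx_books_tags ON books USING GIN (book_tags);

-- Finds books tagged "databases" via the GIN index:
SELECT book_title FROM books WHERE book_tags @> '["databases"]';
```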

~~~
raphaelj
> here book_tags is a json array. Now try index that!

You just need to create a new table which links tags to books. That's how
relational databases are normalized ...
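
I.e. the classic junction-table approach, sketched here with invented names:

```sql
-- Normalize the tags array into its own table; the composite
-- primary key prevents duplicate tags per book, and the extra
-- index makes tag -> books lookups cheap.
CREATE TABLE book_tags (
    book_id INT NOT NULL REFERENCES books(id),
    tag     VARCHAR(100) NOT NULL,
    PRIMARY KEY (book_id, tag)
);

CREATE INDEX idx_book_tags_tag ON book_tags (tag, book_id);

-- All books tagged 'databases':
-- SELECT b.book_title FROM books b
-- JOIN book_tags bt ON bt.book_id = b.id WHERE bt.tag = 'databases';
```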

~~~
est
Expanding a billion-row table into a table with hundreds of billions of rows
is not really a good idea.

MongoDB handles this really well: it creates two index entries pointing to the
same row.

~~~
blumkvist
w00t? A billion attributes on hundreds of billions of rows? Could you pick a
more contrived edge case? You're just coming up with shit so you can show off
how l33t you are. If you have that kind of need, just go for a NoSQL server;
nobody is stopping you. This feature is a nicety to have, not a replacement
for NoSQL.

------
Avalaxy
What's the use of this? Something like JSON.NET makes it extremely easy to map
between JSON and objects and vice versa. That way you'll only store the useful
data and not all the JSON markup.

------
ffk
At first sight, this looks like a competitor to JSON-based data stores such as
MongoDB. However, it looks like JSON is just a wire format here and you still
need to write T-SQL.

~~~
taspeotis
Well Entity Framework 7 is meant to have support for non-relational data
sources [1] so ... at some point you probably won't have to write Transact-SQL
for dealing with JSON data in your cookie-cutter OLAP applications written
with .NET.

[1] [https://msdn.microsoft.com/en-us/magazine/dn890367.aspx](https://msdn.microsoft.com/en-us/magazine/dn890367.aspx)

