
JSON Feed - fold
https://jsonfeed.org/2017/05/17/announcing_json_feed
======
mstade
> JSON Feed files must be served using the same MIME type — application/json —
> that’s used whenever JSON is served.

So then it's JSON, and I'll treat it as any other JSON: a document that is
either an object or an array, that can include other objects or arrays, as
well as numbers and strings. Property names don't matter, nor does the order
of properties or array items, or whatever values are contained therein.

Please don't try to overload media types like this. Atom isn't served as
`application/xml` precisely because it _isn't_ XML; it's served as
`application/atom+xml`. For a media type that is JSON-like but isn't JSON, you
may wish to look at `application/hal+json`; incidentally there's also
`application/hal+xml` for the XML variant.

Or as someone else rightly suggested, consider just using JSON-LD.

~~~
anshou-
It's worth pointing out that any valid JSON value is a valid JSON document.
There is no requirement or guarantee that an array or an object is the top-
level value in a JSON document.

"I am a valid JSON document. So is the Number below, and in fact every line
below this line."

4

null
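This is easy to check against a real parser; for example, Python's stdlib `json` module (which tracks the newer RFC) accepts all of the above at the top level:

```python
import json

# RFC 7159 allows any JSON value at the top level, not just
# objects and arrays -- scalars parse without complaint.
print(json.loads('"I am a valid JSON document."'))
print(json.loads('4'))
print(json.loads('null'))  # parses to Python's None
```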

~~~
jameshart
_Actually_ actually... the JSON spec doesn't define the concept of a JSON
document. Neither [http://www.json.org/](http://www.json.org/) nor
[http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf](http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf)
actually specifies that a JSON 'document' is synonymous with a JSON 'value'.

Now, it's also true that JSON doesn't specify an entity that can be either an
object or an array but _not_ a string or a bool or a number or null. So it's
kind of true that JSON doesn't say that only an object or array is a valid
root element.

 _But_ JSON also says "JSON is built on two structures" - arrays and objects.
It defines those two structures in terms of 'JSON values'. But it's a
reasonable way to read the JSON spec to say that it defines a concept of a
'JSON structure' as an array or object - but _not_ a plain value. And then to
assume that a .json file contains a JSON 'structure'.

Basically... JSON's just not as well defined a standard as you might hope.

edit: And now I'm going to _well actually_ myself: Turns out
[https://tools.ietf.org/html/rfc4627](https://tools.ietf.org/html/rfc4627)
defines a thing called a 'JSON text' which is an array or an object, and says
that a 'JSON text' is what a JSON parser should be expected to parse.

So - pick a standard.

~~~
d0mine
Why are you referencing the obsolete RFC? There is no object/array restriction
on a JSON text in the current RFC,
[https://tools.ietf.org/html/rfc7159](https://tools.ietf.org/html/rfc7159).

~~~
dwaite
The current RFC recommends the use of an object or array for interoperability
with the previous specification. JSON being a bit of a clusterf___ of
variants, they tried to make the RFC broad, then place interoperability
limitations on it (be lenient in what you accept, etc.).

------
mindcrime
Do we really need this? Atom is fine for feeds. Avoiding XML just for the sake
of avoiding XML, because it isn't "cool" anymore, is just dumb groupthink.

If this industry has a problem, it's FDD - Fad Driven Development and IIICIS
(If It Isn't Cool, It Sucks) thinking.

~~~
oxguy3
To be honest, I'm really excited about the prospect of JSON based feeds. Right
now, there's no easy way to work with Atom/RSS feeds on the command-line (that
I know of anyway), which is something I often wish I could do. With a JSON
feed, I can just throw the data at jq
([https://stedolan.github.io/jq/](https://stedolan.github.io/jq/)) and have a
bash script hacked together in 10 minutes to do whatever I want with the feed.
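For environments without jq, the same ten-minute hack works with Python's stdlib alone; a sketch against the spec's example feed shape (top-level `items`, each with a `url` — the data here is made up):

```python
import json

# A hypothetical minimal JSON Feed payload; in practice this
# would come from something like `curl https://example.org/feed.json`.
feed = json.loads("""
{
  "version": "https://jsonfeed.org/version/1",
  "title": "My Example Feed",
  "items": [
    {"id": "2", "url": "https://example.org/second-item"},
    {"id": "1", "url": "https://example.org/initial-post"}
  ]
}
""")

# The jq equivalent would be: jq -r '.items[].url' feed.json
for item in feed["items"]:
    print(item["url"])
```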

~~~
falcolas
I give you libxml:

    
    
        xmllint --xpath '//element/@attribute'
    

There's a good chance it's already installed on your Mac.

~~~
hyperpallium
To avoid the hassle of handling xml namespaces (e.g. in an Atom _feed_...),
just do:

    
    
        xmllint --xpath '//*[local-name()="element"]/@attribute'
    

Note: unprefixed attribute names are in no namespace, so they don't need the
same treatment.

[http://stackoverflow.com/questions/4402310/how-to-ignore-namespace-when-selecting-xml-nodes-with-xpath](http://stackoverflow.com/questions/4402310/how-to-ignore-namespace-when-selecting-xml-nodes-with-xpath)

------
russellbeattie
For anyone who's tried to write a real-world RSS feed reader, this format does
little to solve the big problems that newsfeeds have:

* Badly formed XML? Check. There might be badly formed JSON, but I tend to think it'll be a lot less likely.

* Need to continually poll servers for updates? Miss. Without additions to enable pubsub, or dynamic queries, clients are forced to use HTTP headers to check last updates, then do a delta on the entire feed if there is new or updated content. Also, if you missed 10 updates, and the feed only contains the last 5 items, then you lose information. This is the nature of a document-centric feed meant to be served as a static file. But it's 2017 now, and it's incredibly rare that a feed isn't created dynamically. A new feed spec should incorporate that reality.

* Complete understanding of modern content types besides blog posts? Miss. The last time I went through a huge list of feeds for testing, I found there were over 50 commonly used namespaces and over 300 unique fields in use. RSS is used for _everything_ from search results to Twitter posts to podcasts... It's hard to describe all the different forms of data it can contain. The reason is that the original RSS spec was so minimal (there are like 5 required fields) that everything else has just been bolted on. JSONFeed makes this same mistake.

* An understanding that separate but equal isn't equal? Miss. The thing that [http://activitystrea.ms](http://activitystrea.ms) got right was the realization that copying content into a feed just ends up diluting the original content formatting, so instead it just contains metadata and points to the original source URL rather than trying to contain it. If JSONFeed wanted to really create a successor to RSS, it would spec out how to send formatting information along with the data. It's not impossible - look at what Google did with AMP: they specified a subset of formatting options so that each article can still contain a unique design, but limited the options to increase efficiency and limit bugs/chaos.

This stuff is just off the top of my head. If you're going to make a new feed
format in 2017, I'm sorry but copying what came before it and throwing it into
JSON just isn't enough.

~~~
hboon
FWIW, this is by Manton Reece and Brent Simmons. Simmons is known (among other
things) as the creator of NetNewsWire, which has been around for more than 15
years. He does know a bit about Atom and RSS feeds.

[https://en.wikipedia.org/wiki/NetNewsWire](https://en.wikipedia.org/wiki/NetNewsWire)

~~~
nothrabannosir
Ok, I have no idea who these guys are, so forgive me for being rude: if
they're so good, then why did they not address those points? To my eyes, OP
makes a solid argument. I'd like to know their side of the story.

~~~
yoz-y
But they did...

> Badly formed XML? Check. There might be badly formed JSON, but I tend to
> think it'll be a lot less likely.

The problem with XML is mostly that it is a very complex format, so bugs are
more probable and there are more pitfalls.

> Need to continually poll servers for updates? Miss. Without additions to
> enable pubsub, or dynamic queries ...

They actually did add tags to enable WebSub (previously called PubSubHubbub),
so there goes that. For the other concerns, I think it is not the format's job
to care for partial or incomplete data. Nothing prevents you from having a
dynamic link with an "updatesSince" parameter on your webpage and serving all
of the articles that were added or updated after that. Nowhere does the format
specify a limit on the number of items. It also incorporates paging out of the
box, so you could bubble up any old articles.
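For instance, a consumer can walk the spec's `next_url` paging links to recover older items. A sketch with a stubbed-out fetch (a real client would do an HTTP GET; the URLs here are made up):

```python
import json

# Two hypothetical pages of a feed, keyed by URL; in practice
# fetch() would be an HTTP GET against the feed server.
PAGES = {
    "https://example.org/feed.json": json.dumps({
        "title": "My Example Feed",
        "items": [{"id": "3"}, {"id": "2"}],
        "next_url": "https://example.org/feed.json?page=2",
    }),
    "https://example.org/feed.json?page=2": json.dumps({
        "title": "My Example Feed",
        "items": [{"id": "1"}],
    }),
}

def fetch(url):
    return json.loads(PAGES[url])

def all_items(url):
    """Follow next_url links until the last page."""
    items = []
    while url:
        page = fetch(url)
        items.extend(page["items"])
        url = page.get("next_url")
    return items

print([item["id"] for item in all_items("https://example.org/feed.json")])
```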

> Complete understanding of modern content types besides blog posts? Miss.

The point of this is the open web; by definition nobody can anticipate all
formats. Rather than fill the spec with tweets, Facebook posts, and other
types, they have opted for the lowest common denominator and added a specific
way to add extensions. This makes way more sense.

> An understanding that separate but equal isn't equal. Miss.

Nothing actually prevents you from leaving the content fields blank and
relying on the reader to pull the original content. But for this kind of usage
there are other methods. Personally I prefer content delivered in the RSS
precisely to avoid having to deal with customization of content formatting.
JSON Feed HAS a way to specify formatting, though: it's called HTML tags. No
need to reinvent the wheel here.

~~~
russellbeattie
I don't agree with most of what you wrote, but the "it's called HTML tags" is
the most wrong. You must not have tried this any time in the past 5 years or
so. The embedded tags come out of CMSs and - when they're not stripped
completely - look like <div class="title-main-sub-1"> and
<span class="sub-article-v5-bld">. HTML isn't used alone; it's always used
with CSS nowadays, and no matter whether semantic tags are best practice, the
fact is they're optional and regularly not used. If they're going to create a
new _standard_ format, they need to address this.

~~~
yoz-y
What is the difference between re-publishing the content in some other format
which will do formatting well and re-publishing the content using sensible
html tags with maybe some embedded minimal stylesheet?

There might be misuse and abuse, but if you want to avoid that, you can always
push Markdown into the "text" representation.

------
CharlesW
Dave Winer (the creator of RSS) played with this a bit in 2012. It turns out
that the exact format of feeds doesn't matter nearly as much as there being a
more-or-less universal one.

[http://scripting.com/stories/2012/09/10/rssInJsonForReal.htm...](http://scripting.com/stories/2012/09/10/rssInJsonForReal.html)

~~~
AceJohnny2
I'm sure there's...

oh of course: [https://xkcd.com/927/](https://xkcd.com/927/)

(and I realize this doesn't exactly map, as JSON Feed isn't even trying to
cover all the usecases of Atom or RSS, just switching the container format)

------
gedrap
But does it solve any actual problems other than 'XML is not cool', problems
big enough to deserve a new format?

It's true that JSON is easier to deal with than XML. But that's relative;
there are plenty of decent tools around RSS, from readers, to libraries in the
most common programming languages, to extensions in the most common content
management systems. JSON is slightly easier for a human to read (although
that's subjective), but then how often do you need to read an RSS feed
manually, unless you are the one writing those libraries? That's a tiny share
of all the people using RSS.

> It reflects the lessons learned from our years of work reading and
> publishing feeds.

It sounds like the authors have extensive experience in this field and know
things better than some random person on the internet (me). But the homepage
of the project doesn't convey those learned lessons.

~~~
tannhaeuser
Yes, JSON is much easier to parse than XML, and is preferred _when it fits_,
such as for most Web API requests and responses.

However, SGML and XML were invented as structured markup languages _for
authoring of rich text documents by humans_, for which JSON sucks just as much
as XML sucks for APIs.

Edit: though XML has its place in many B2B and business-to-government data
exchanges (financial and tax reporting, medical data exchange, and many
others) where a robust and capable up-front data format specification for
complex data is required.

------
zeveb
If we're going to talk about replacing XML with better data formats, why not
switch to S-expressions?

    
    
        (feed
         (version https://jsonfeed.org/version/1)
         (title "My Example Feed")
         (home-page-url https://example.org)
         (feed-url https://example.org/feed.json)
         (items
          (item (id 2)
                (content-text "This is a second item.")
                (url https://example.org/second-item))
          (item (id 1)
                (content-html "<p>Hello, world!</p>")
                (url https://example.org/initial-post))))
    

This looks much nicer IMHO than their first example:

    
    
        {
            "version": "https://jsonfeed.org/version/1",
            "title": "My Example Feed",
            "home_page_url": "https://example.org/",
            "feed_url": "https://example.org/feed.json",
            "items": [
                {
                    "id": "2",
                    "content_text": "This is a second item.",
                    "url": "https://example.org/second-item"
                },
                {
                    "id": "1",
                    "content_html": "<p>Hello, world!</p>",
                    "url": "https://example.org/initial-post"
                }
            ]
        }

~~~
hajile
Those two aren't comparable because you cannot distinguish between key:val
pairs and lists. You need dotted lists.

~~~
zeveb
Nope, one would normally write the code which parses such S-expressions such
that the first atom in each list indicates the function to use to parse the
rest of the list. So there'd be a FEED-FEED function which knows that a feed
may have version, homepage URL &c., and there'd be a FEED-ITEMS function which
expects the rest of its list to be items, a FEED-ITEMS-ITEM function which
knows about the permissible components of an item &c.

If you really want to do a hash table, you could represent it as an alist:

    
    
        (things
          (key1 val1)
          (key2 val2))
    

This all works because — whether using JSON, S-expressions or XML — ultimately
you need something which can make sense of the parsed data structure. Even
using JSON, nothing prevents a client submitting a feed with, say, a homepage
URL of {"this": "was a mistake"}; just parsing it as JSON is insufficient to
determine if it's valid. Likewise, an S-expression parser can render the
example, but it still needs to be validated. One nice advantage of the
S-expression example is that there's an obvious place to put all the
validation, and an obvious way to turn the parsed S-expression into a valid
object.
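A minimal sketch of that first-atom-dispatch idea in Python (a toy reader, nowhere near a full Lisp reader, just enough for the alist example above):

```python
import re

def parse_sexp(text):
    """Minimal S-expression reader: atoms and nested lists only."""
    tokens = re.findall(r'\(|\)|"[^"]*"|[^\s()]+', text)
    def read(pos):
        tok = tokens[pos]
        if tok == "(":
            out, pos = [], pos + 1
            while tokens[pos] != ")":
                node, pos = read(pos)
                out.append(node)
            return out, pos + 1
        return tok.strip('"'), pos + 1
    node, _ = read(0)
    return node

# Dispatch on the first atom: (things (key1 val1) (key2 val2))
# is handled by a function that knows its tail is an alist.
def alist_to_dict(sexp):
    assert sexp[0] == "things"
    return {k: v for k, v in sexp[1:]}

d = alist_to_dict(parse_sexp('(things (key1 val1) (key2 val2))'))
print(d)
```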

~~~
hajile
In the absolute abstract, you are correct. In the absolute abstract, you could
replace parens with significant whitespace and have zero visible syntax.

In practice, Lisp adopted dotted lists 60 years ago, and basically every Lisp
since has used them as one way to represent an association list. Minimal
syntax is better than zero syntax or loads of syntax.

------
Communitivity
It is worth pointing out that there is a relevant W3C Recommendation, "JSON
Activity Streams":
[https://www.w3.org/TR/activitystreams-core/](https://www.w3.org/TR/activitystreams-core/).
I'm not saying JSON Feed is worse, or better. I am saying that I think JSON
Feed's adoption requires a detailed comparison between JSON Feed and JSON
Activity Streams 2.0.

~~~
smilbandit
Thanks +1, didn't know that.

------
eric_the_read
A few thoughts on the spec itself:

* In all cases (feed and items), the author field should be an array to allow for feeds with more than one author (for instance, a podcast might want to use this field for each of its hosts, or possibly even guests).

* external_url should probably be an array, too, in case you want to refer to multiple external resources about a specific topic, or in the case of a linkblog or podcast that discusses multiple topics, it could link to each subtopic.

* It might be nice if an item's ID could be enforced to a specific format, even if perhaps only within a single feed. Otherwise it's hard to know how to interpret posts with IDs like "potato", 1, null, or "http://cheez.burger/arghlebarghle".

~~~
smilbandit
I was thinking that all the urls and images should have been in arrays.

~~~
eric_the_read
Probably, but I think the goal there is to have something that you can display
on a summary page with a list of items or episodes, where there's just an icon
for each (and a banner image for the background or header or some such), for
which purpose I think a single image is fine (I totally get your wanting more
than one, though, and I'm happy to be wrong here).

------
jerf
I would suggest specifying titles as html, not plain text. I've seen too many
things titled "I <i>love</i> science!" over the years to believe in the idea
that titles are plain text.

Also, despite the fact this is technically not the responsibility of the spec
itself, I would _strongly_ suggest some words on the implications of the fact
that the HTML fields are indeed HTML and the wisdom of passing them through
some sort of HTML filter before displaying them.

In fact that's also part of why I suggest going ahead and letting titles
contain HTML. All HTML is going to need to be filtered anyhow, and it's OK for
clients to filter titles to a smaller valid tag list, or even filter out all
tags. Suggesting (but not mandating) a very basic list of tags for that field
might be a good compromise.
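For what it's worth, the "filter out all tags" end of that spectrum is a few lines with Python's stdlib (a sketch of one client policy, not a full sanitizer):

```python
from html.parser import HTMLParser

# One defensible client policy: reduce titles to plain text by
# dropping every tag and keeping only character data.
class TagStripper(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def strip_tags(html):
    s = TagStripper()
    s.feed(html)
    s.close()
    return "".join(s.parts)

print(strip_tags("I <i>love</i> science!"))  # -> I love science!
```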

~~~
ergothus
Allowing HTML means the other side will have to sanitize that HTML (to avoid
XSS). Using text means you can stick it in the DOM using innerText and be much
more confident that you aren't injecting XSS.

I agree that I see HTML in RSS titles, but I'd rather have the occasional
garbled title - which the author can fix by stripping out HTML before it goes
into the RSS - than rely on every RSS reader not opening up new security
holes.

~~~
jerf
There is no way to avoid having to handle HTML safely. There's no point in
trying to limit your exposure to that problem when the entire point of this
standard is to ship around arbitrary HTML for interfaces to display. Once
you've solved the hard problem of displaying the _body_ safely, displaying the
title is trivial. Making the title pure text does nothing useful. JSONFeed
display mechanisms that are going to get this wrong are going to do things
like leave injections in the date fields anyhow.

------
jawns
> It's at version 1, which may be the only version ever needed.

Wow. Now that's confidence. Have you ever read the first version of a spec and
thought, "That's just perfect. Any additional changes would just be a
disappointment compared with the original"?

~~~
Johnny_Brahms
MIDI 1.0 is maybe not perfect, but it has been unchanged since 1983. People
have tried to replace it for two decades but have failed to provide any
enhancements worth a switch.

But MIDI doesn't really fit that description, since it builds on two years of
work by Roland. It's my best bet, though.

------
pimlottc
> JSON is simpler to read and write, and it’s less prone to bugs.

Less prone to bugs? How's that?

~~~
bastawhiz
Consider XML entity bombs. You need to explicitly tell your XML parser not to
follow the spec to prevent malicious sources of XML from crashing your
application. XML also has a lot of room for syntax errors, with many types of
tokens and escape rules. JSON, by comparison, does not.
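A rough illustration of the difference in surface area: JSON's escapes form a small closed set and there is no entity mechanism to expand, so a parser either decodes or fails fast (Python's stdlib `json` shown):

```python
import json

# JSON's entire escape repertoire: \" \\ \/ \b \f \n \r \t \uXXXX.
print(json.loads(r'"line\nbreak \u0041"'))

# No entity mechanism exists, so there is nothing to "expand";
# an unknown escape is simply a parse error.
try:
    json.loads(r'"\q"')
except json.JSONDecodeError as e:
    print("rejected:", e)
```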

~~~
jjawssd
> XML also has a lot of room for syntax errors, with many types of tokens and
> escape rules. JSON, by comparison, does not.

Parsing JSON is a minefield.

Take a look at how a bunch of parsers perform with various payloads - yellow
and light blue boxes highlight the worst situations for applications using the
specified parser:
[http://seriot.ch/json/pruned_results.png](http://seriot.ch/json/pruned_results.png)

"JSON is the de facto standard when it comes to (un)serialising and exchanging
data in web and mobile programming. But how well do you really know JSON?
We'll read the specifications and write test cases together. We'll test common
JSON libraries against our test cases. I'll show that JSON is not the easy,
idealised format as many do believe. Indeed, I did not find two libraries that
exhibit the very same behaviour. Moreover, I found that edge cases and
maliciously crafted payloads can cause bugs, crashes and denial of services,
mainly because JSON libraries rely on specifications that have evolved over
time and that left many details loosely specified or not specified at all."

More details available at:
[http://seriot.ch/parsing_json.php](http://seriot.ch/parsing_json.php)

~~~
floatboth
None of these issues are as bad as the XML ones. You generally don't need
"defusedjson" like you need
[https://pypi.python.org/pypi/defusedxml](https://pypi.python.org/pypi/defusedxml)

    
    
        <!DOCTYPE external [
          <!ENTITY ee SYSTEM "file:///etc/ssh/ssh_host_ed25519_key">
        ]>
        <root>&ee;</root>
    

------
ttepasse
Shortly after RSS 0.9 came out, RSS 1.0 reformulated the RSS vocabulary in RDF
terms. Of course, the modern (sane) successor to RDF/XML is JSON-LD.

So I'm hoping for JSON-LD Feed 1.1 and a new war of format battles. Maybe we
can even get Mark Pilgrim out of hiding!

~~~
toyg
Someone should open a social network for feed-wars veterans.

More seriously, it's sad to see that almost 20 years later, the dream of a
decentralised and bidirectional web is in even worse shape than it was back
then.

~~~
bullen
Yes, extend this to JSON pingback and bring back the decentralized social web.

------
einrealist
If you create a new JSON-based document format, please consider using JSON-LD
(alongside raw JSON data) so we can make a true world of interconnected data
through semantic formats. At the least, it would let me generate code and
automatically validate format compatibility against a well-defined schema.
Thank you!

EDIT: Because I got downvoted despite just stating my opinion on the topic, I
adjusted the statement.

~~~
strictnein
No, please don't.

~~~
dabernathy89
Well now I don't know what to think.

------
gwu78
Is this a "JSON Feed" from NYTimes?

The example below extracts all URLs for a specific section of the paper.

    
    
       test $# = 1 || exec echo "usage: $0 section"
    
       curl -o 1.json https://static01.nyt.com/services/json/sectionfronts/$1/index.jsonp
       exec sed '/\"guid\" :/!d;s/\",//;s/.*\"//' 1.json
    

I guess SpiderBytes could be used for older articles?

Personally, I think a protocol like netstrings/bencode is better than JSON
because it better respects the memory resources of the user's computer.

Every proposed protocol will have tradeoffs.

To me, RAM is sacred. I can "parse" netstrings in one pass, but I have been
unable to do this with a state machine for JSON. I have to arbitrarily limit
the number of states or risk a crash. Just as it is easy to exhaust a user's
available RAM with JavaScript, so too can it be done with JSON. Indeed, they
go well together.
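For reference, the one-pass property looks like this in Python (a sketch of the netstring framing, `length:data,`, not production parsing code):

```python
def parse_netstring(buf, pos=0):
    """One-pass netstring read: b'12:hello world!,' -> payload.

    The length prefix lets a reader allocate exactly once and
    reject oversized payloads before touching the data -- the
    memory-respecting property described above.
    """
    colon = buf.index(b":", pos)
    length = int(buf[pos:colon])
    start = colon + 1
    end = start + length
    if buf[end:end + 1] != b",":
        raise ValueError("missing trailing comma")
    return buf[start:end], end + 1

payload, _ = parse_netstring(b"12:hello world!,")
print(payload)  # b'hello world!'
```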

------
pedalpete
"JSON has become the developers’ choice for APIs", I'm curious about how
people feel about this statement from a creation vs consumption perspective.

I'm currently creating an API where I'm asking devs to post JSON rather than a
bunch of separate parameters, but I haven't seen this done in other APIs (if
you have, can you point me to a few examples?). I'm curious what others'
thoughts are on this. It seems that with GraphQL, we're maybe starting to move
in this direction.

------
smilbandit
I'd like to see a language available at the item level. You can derive the
language from the http headers but if you're dealing with linkblogs it would
be nice at the item level to help with filtering.

I think that images and URLs would do well as ordered lists rather than as
individual values. At the top level you have 3 URLs and an array for hubs.
With type and url, you could have an array for both the hubs and the URLs. The
same could be done for images at the top level, and both again at the item
level.

------
niftich
It's unfortunate that XML has fallen so out of favor that well-made, strongly-
schemad formats specified in XML, like Atom, are suffering in turn -- although
the reasons for feeds' demise go well beyond their forms-on-the-wire. This
trend frustrates me, but it's undeniable that a lot of web data interchange
happens with JSON-based formats nowadays, and the benefits of network effects,
familiarity, and tooling support make JSONification worth exploring.

But even more frustrating is when a format comes out that's close to being a
faithful translation of an established format, but makes small, incompatible
changes that push the burden of faithful translation onto content authors, or
the makers of third-party libraries.

I honestly don't intend to offer harsh targeted critique against the authors
-- I assume good faith; more just voicing exasperation. There have been
similar attempts over the years -- one from Dave Winer, the creator of RSS
0.92 and RSS 2.0, called RSS.js [1], which stoked some interest at first [2];
others by devs working in isolation without seeming access to a search engine
and completely unaware of prior art; some who are just trying something
unrelated and accidentally produce something usable [3]; finally, this
question pops up from time to time on forums where people with an interest in
this subject tend to congregate [4]. Meanwhile, real standards bodies are off
doing stuff that reframes the problem entirely [5] -- which seems out-of-touch
at first, but I'd argue provides a better approach than a similar-but-not-
entirely-compatible riff on something really old.

And as a meta-point, "people who use JSON-based formats", as a loose
aggregate, have a serious and latent disagreement about whether data should
have a schema or even a formal spec. In the beginning, when people first
started using JSON
instead of XML, it was done in a schemaless way, and making sense of it was
strictly best-effort on part of the receiving party. Then a movement appeared
to bring schemas to JSON, which went against the original reason for using
JSON in the first place, and now we're stuck with the two camps playing in the
same sandbox whose views, use-cases, and goals are contradictory. This appears
to be a "classic" loose JSON format, not a strictly-schemad JSON format, not
even bothering to declare its own mediatype. This invites criticism from the
other camp, yet the authors are clearly not playing in that arena. What's the
long-term solution here?

[1] [http://scripting.com/stories/2012/09/10/rssInJsonForReal.html](http://scripting.com/stories/2012/09/10/rssInJsonForReal.html)
[2] [https://core.trac.wordpress.org/ticket/25639](https://core.trac.wordpress.org/ticket/25639)
[3] [http://www.giantflyingsaucer.com/blog/?p=3521](http://www.giantflyingsaucer.com/blog/?p=3521)
[4] [https://groups.google.com/forum/#!topic/restful-json/gkaZl3AtiPk](https://groups.google.com/forum/#!topic/restful-json/gkaZl3AtiPk)
[5] [https://www.w3.org/TR/activitystreams-core/](https://www.w3.org/TR/activitystreams-core/)

------
0x006A
Why is it size_in_bytes and duration_in_seconds, as opposed to the style of
content_text and content_html?

It should just be size and duration, or size_bytes and duration_seconds (but
adding units only makes sense if you could use other units). Adding _in to the
mix is strange.

------
gumby
A good announcement explains what problem it is intending to solve.

------
bullen
I miss the distributed social pingback days!

Implemented:
[http://sprout.rupy.se/feed?json](http://sprout.rupy.se/feed?json)

------
voidfiles
This seems like a great idea. If it can help even one developer it's worth it.

~~~
CharlesW
How would it help even one developer?

Or asked another way, what problem does this solve for you?

~~~
voidfiles
So, my personal blog doesn't get a ton of traffic, but the one article that
gets the most traffic is an article about how to monkeypatch feedparser to not
strip out embedded videos.

While not hard evidence, I think it's indicative of the kind of experience a
developer has when they choose to engage with syndication.

------
cocktailpeanuts
Doesn't Wordpress already have something like this?
[http://v2.wp-api.org/](http://v2.wp-api.org/)

I don't understand why suddenly people treat this like something that uniquely
solves a problem. Maybe I'm missing something?

~~~
yoz-y
This format is more akin to RSS than to a programmatic REST API. The main goal
is to avoid the pitfalls of parsing Atom and RSS feeds. Both Brent Simmons and
Manton Reece are quite active in building decentralized alternatives for self-
publishing, for which RSS is the current backbone.

~~~
donohoe
Parsing RSS and Atom feeds is a solved problem, no?

~~~
yoz-y
JSON Feed is a new solution for the problem already solved by RSS or Atom. It
makes it easier to develop new publishers and consumers. It also tackles the
main problems with these two formats, e.g.: no realtime subscriptions,
mandatory titles which are a pain for microblogs, potential security problems
with XML and so on.

As somebody somewhere has written: if no one had ever reinvented the wheel,
our cars would be rolling around on big wooden logs.

------
ozten
XML is awful, but it does have CDATA, which lets you embed blog posts
directly, and it's easy to debug.

String encoded blog posts are going to be painful once people start using the
`content_html` part of the spec.

~~~
__david__
Naw, JSON has reasonable quoting in its strings. It's maybe painful to read
the raw JSON, but it encodes just fine.
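Easy to check with a sketch using Python's stdlib `json` (the post content here is made up):

```python
import json

# HTML with quotes, entities, and newlines -- the stuff CDATA
# would shield in XML -- survives JSON string escaping exactly.
post = '<p>He said "hi" &amp; left.\nNew line.</p>'
encoded = json.dumps({"content_html": post})

# The raw JSON is harder on the eyes than CDATA...
print(encoded)
# ...but the round trip is lossless.
assert json.loads(encoded)["content_html"] == post
```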

------
pswenson
i'm surprised no one has started a snake vs camel case debate here!
[https://jsonfeed.org/version/1](https://jsonfeed.org/version/1)

------
nilved
Good lord, Web people, stop it. You are embarrassing yourselves. We already
have standards and you need to stop recreating everything in JavaScript.

~~~
frou_dh
Brent Simmons is hardly some webdev kid barging in. He was the original
developer of NetNewsWire, a very popular/influential feed reader application
which is now 15(!) years old.

~~~
smilbandit
thanks, knew I recognized the name

------
systematical
Who uses feeds? Who uses XML?

------
ehosca
stopped reading after "JSON is simpler to read and write, and it’s less prone
to bugs." ....

------
donohoe
I have grave concerns that this publishing format is delivered to us by two
people who, as far as I can see, have limited to zero publishing background.

That said, they're being responsive to questions in Issues, so I remain
optimistic.

~~~
acdha
Learning about the history of RSS should alleviate your concerns. Brent
Simmons has been working in the space for 15 years, writing one of the more
popular clients and working at a company which provided sync and syndication
services:

[https://en.wikipedia.org/wiki/NetNewsWire#History](https://en.wikipedia.org/wiki/NetNewsWire#History)

