For those like me who needed a little help... I do not fully understand the browser rendering process (someone who does, please chime in), but what I gather about how this works is:
- the content-type of the page is "text/html", so the browser is trying to render html
- there is no special meaning of the #render key to the browser (again the browser doesn't know this is json)
- browsers are very fault tolerant so they'll just skip over your document till they find pieces of html
- the JS made by the author is the part that parses the json as json, and it uses the #render key as its metadata section
You can open a file like this yourself to get a sense of just the browser-parsing part; save it as test.html:
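Something along these lines (my own minimal sketch; the site's real render script is more involved):

```html
{
  "title": "Hello",
  "#render": "<html hidden><script>addEventListener('load', () => { document.body.textContent = 'rendered by the #render script'; document.documentElement.hidden = false; });</script>"
}
```

With JS enabled you only see the text the script writes; with JS disabled this sketch shows a blank page, since nothing removes the `hidden` attribute (the real site adds a `<noscript>` style for that, as another comment here explains).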
I do however get this warning in Firefox, so perhaps this is pretty fragile:
> The character encoding declaration of the HTML document was not found when prescanning the first 1024 bytes of the file. When viewed in a differently-configured browser, this page will reload automatically. The encoding declaration needs to be moved to be within the first 1024 bytes of the file.
I'm guessing differently-configured means not in quirks mode? Does quirks mode just make the browser extra fault tolerant?
> I'm creating a blog platform using this concept.
Yeah, please don’t; this is an abomination that’s fun for demonstrating and teaching how these things work, but should absolutely never be used in reality.
You could use this as your data model foundation upon which to build a generator, but you should never under any circumstances actually serve this stuff.
This is such a depressing comment. It's the opposite of the hacker spirit. The web has made it easier than ever to discourage people from trying anything new.
New ideas seem like bad ideas, because otherwise people would be doing them. One could imagine your objection applying to React: "What a horrible idea. It's a nice hack, but under no circumstances should you actually build webpages like this. Webpages are built out of HTML and valid JavaScript, which this is not..."
This is just a bad idea. Apart from abusing browser rendering quirks (which is unfortunately a sad tradition of the web, something that has claimed the lives of entire technology stacks like XML/XHTML), it removes any hope of producing webpages accessible from screen readers and the like. In comparison, JS-only pages look like model citizens. This is the equivalent of abusing IE hot-comments and other CSS-parsing bugs to style your website, but worse (at least that technique was useful).
If JS-generated sites can support ARIA, then I don't really see why this couldn't do the same thing, since it's just a JS site with a funky initial payload.
The "fun" of this site is that it's its own API. Sure there are better ways to accomplish this but abusing quirks for fun and profit is the hacker spirit.
Hacking is mostly playing around and using things in ways the designer didn't imagine at the time.
This is a good hack. It brings a smile to the face and warms the heart (if a good hack is what you were looking for). And I think that was the point anyway, not to come up with a professional architectural pattern.
So long as the JavaScript executes, this doesn’t actually harm accessibility: as it loads, it slurps the JSON, and turns it into a perfectly normal web page; rather like XSLT, as others have pointed out. And quirks mode isn’t that serious a problem. It’s a mild nuisance at most, really.
I disagree with this removing any hope of accessibility. If anything it brings hope for websites to be more accessible than ever before. Imagine a screen reader that instantly and flawlessly knows which content to read and how to navigate through it? If a consistent JSON format is adopted and used this would be a reality.
You’re misunderstanding the purpose of this JSON/HTML combination, and I get the impression you’re probably not familiar with how screen readers work, either. The JSON is purely transient, being projected to the HTML DOM. The JSON has no standard semantics; the idea is that it should be whatever shape makes sense for your use case, and then that you should use JavaScript to project it to HTML. Think of it as a server-side templating language that takes a bag of data and decides how to write the HTML, except that the document is the data, plus an extra piece that embeds the template to apply to the data.
The web is pretty much best-in-class for accessibility matters. (There are a few isolated cases where native desktop or mobile apps can do better, mostly to do with efficiency.) HTML elements have defined semantics, so that things like headings and links are automatically navigable, and sections, headers, footers and navigation lists become waypoints. Then ARIA attributes can be used to provide any further metadata necessary, such as to mark up a tabs widget to show how to interact with it. And that’s still key—accessibility needs to care about interactions (which tab is open? and did the content available change?), so state matters. Thus, accessibility tools will never care about any format that you are projecting from, like this JSON; they must only care about what is materialised, which is the HTML DOM. (Besides all that, the only sort of “consistent JSON format” that you could have would be basically an encoding of the HTML, which would be verbose and subjectively ugly compared to the HTML serialisation, e.g. ["a", {"href": "/"}, ["Home"]] or {"tagName": "a", "href": "/", "children": ["Home"]} instead of <a href="/">Home</a>, and miss the whole point here that the JSON is representing data rather than what the user sees.)
If you’re not familiar with accessibility stuff, I heartily recommend looking into it. If you can, find a blind person and see if you can watch them using a computer or phone. It’s really fascinating (I’ve never seen anyone be bored by it) and super useful if you ever contribute to making just about anything on a computer. Even people making documents in a word processor can learn things like “use actual headings rather than just making the text bigger and bold, because the semantics are useful”.
I think the difference is that nothing in React abuses fault tolerance in a browser to the extent that this does. This very specifically works only because browsers don't generally care whether an HTML page is anything close to valid HTML. There's no guarantee at all that this will continue to be the case, and it's reasonable to assume that some browser in the future might decide to render things that are very clearly a JSON document as a JSON document instead, fundamentally breaking the core architecture of your system.
React is a different approach, but it plays by the rules and doesn't serve invalid HTML or Javascript.
In fact, at least Firefox already has support for displaying JSON files. I assume this does not happen here because eventually, there actually is an <html> tag.
The primary reason for calling it an abomination is that if they actually plan on having this used (especially if by other people) it will spectacularly break the moment Chrome decides to make their quirks mode more strict.
It's a fun hack, although not exactly very unique. See also the very old by now "Website in a PNG file" concept, which does the exact same thing: https://gist.github.com/gasman/2560551
This will never break. I still think it's a bad idea, but not for that reason. Browser vendors are strongly against ever making backwards incompatible changes - especially a change like this that would likely affect thousands of websites. Chrome is not going to suddenly make their HTML rendering engine more strict.
React and its ilk were designed for use in apps, where there are meaningful advantages in powering things entirely with client-side scripting rather than generating server-side HTML and possibly enhancing it on the client side with scripting.
They were then abused by increasingly many people for rendering static content, things like blogs.
These people were using the wrong tool for the job, and it has had a detrimental effect on the web.
Now, pages are regularly far heavier than before with expensive client-side code to do stuff that should have been done server-side in almost all cases, and it became popular enough that search engines eventually had to cave and introduce a full JavaScript execution environment in their indexers, tooling everywhere got a lot more complicated, and the last state of the web was worse than the first.
There is a place for things like React: in rich apps, and perhaps even for server-side rendering, though I’m not fond of that for things like blogs because it encourages you to end up depending on it on the client side too.
But client-side JavaScript app frameworks made some formerly-impossible things possible, and formerly-complex-and-unmaintainable things tractable.
This monstrosity, on the other hand, offers no actual benefits for the user, and does introduce a few new problems (quirks mode for styling, and an unnecessary dependency on JavaScript). And so I say it should never be exposed to the client side. Use it as an input format for your blog generator if you like, but don’t try shipping a wonky JSON/HTML polyglot directly.
Look. We're on a Show HN. You call his work a monstrosity with no benefits for the user. You call it an abomination. When you've used up all your insults, what's left? What are you going to call something deserving of hate? It's so petty and bitter, and you need to go have fun with something. Go play!
As for your React commentary, it's 4:42am my friend, and I was just sad to see someone take such a hot steamy dump on someone's work on a Show HN thread without a single other person standing up for them. But all of your points about React can be summed up as "well, yes, that's what happens when something is successful: history is rewritten to make it seem like it had a place from the beginning."
If someone was like "Show HN: React - a new way to write websites," it feels like a guarantee you'd be right there like "But it breaks when you turn off Javascript!" Meanwhile, even Tor admitted defeat long ago and enabled JS by default.
I’m specifically objecting because of the text “I'm creating a blog platform using this concept” and what else the author is saying here. As a technical demonstration, it’s fine—various people don’t realise you can do things like this, and I’m all in favour of showing people these sorts of things, as it does get people thinking in interesting ways. But the author seems to think that this is a good idea, which I’m afraid it’s just not, so I can’t mince words—though perhaps I have expressed myself a bit more strongly than is seemly (thank you for pulling me up on that).
Of React (and its ilk: React was by no means the first project along these lines; as an example, I can think of having hit a couple of full JS-required Knockout sites well before React was a thing, where they would have been better as prerendered HTML), I’m not saying that history was rewritten to say it had a place from the beginning, but rather that there was a place for it from the beginning: that there is a certain type of app where there are very substantial benefits for the user in doing at least some parts on the client side (that was where the jQuery style of progressive enhancement started, and then things like Backbone steadily expanded it), and then that architecturally there are substantial benefits to going all in on client-side rendering if you need this sort of enhancement (this was what Knockout tended towards, and what ExtJS and React more fully realised)—but that this had costs, too, in that it broke the traditional model, making life harder for all kinds of tooling and making pages heavier, so that it shouldn’t be used everywhere.
React was not initially intended as a way to write web sites, but rather web apps. It’s an important distinction. For apps like Facebook and Twitter, the advantages of server-side rendering were not so applicable, and the benefits of full client-side rendering more marked. Unfortunately, the SPA craze grew further, and people liked using one tool everywhere, and so it became more and more common to use React in places where it was inappropriate at the time; until finally tooling like search engines caved on the whole JavaScript thing.
I still don’t like how often I find normal websites depending on JavaScript for fundamental rendering (it may not surprise you to discover that I browse with JavaScript disabled by default—mostly for performance and minimisation of annoyances), but at least React has purpose and some advantages, and has done from the start.
Meanwhile, this thing here doesn’t get you any of the benefits of client-side rendering (things like lighter and faster subsequent page loads and transitions, by using AJAX and semantic knowledge), but does carry all of the costs of client-side rendering: poorer performance, and making life harder for tooling of all kinds.
I agree with you. People should just explore the possibilities that exist to the fullest, harmless extent. There's no harm in this. Along the way, they might find something entirely new. Blocking off that path with "don't ever do this" is just closed-minded.
It's a good thing people exist that aren't deterred by gatekeeping comments like these and actually try to innovate or simply play around and have fun with programming.
But what do you think one gains by serving json instead of serving valid html like a body with just `<data src="actual.json"></data>`?
If you can retain full functionality (and hack on your ideas) AND be standards compliant (to make sure someone who decides to start offering a new web browser doesn't have to worry about 10% of websites serving this instead of valid HTML), then you should do that.
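For instance, something like this keeps the JSON-first architecture while serving valid HTML (a sketch; note that `src` on `<data>` is the parent comment's invention rather than a standard attribute, so it's read with `getAttribute`):

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <script type="module">
    // Module scripts run after parsing, so the <data> element exists by now
    const url = document.querySelector('data').getAttribute('src');
    const data = await (await fetch(url)).json();
    document.body.textContent = data.title; // real rendering goes here
  </script>
</head>
<body><data src="actual.json"></data></body>
</html>
```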
You're both right. Suggesting other people try your blog built on this platform is giving people bad advice, and the comment you're replying to is overly negative.
I think the thing that's annoying is trying to sell other people on using such a tool. Anyone crazy enough to try it should jump right in, but don't try to talk people into it as an actual good blog option.
I call this sort of project "linux on a wristwatch." It's totally cool and a fun hack, but there's no real utility to it beyond an art piece.
Moreover this is not even "plain JSON". While the OP doesn't make this exact claim, an arbitrary JSON document with the `#render` bootstrap won't necessarily work, because the JSON can contain valid HTML tags... Interesting as it stands, utterly useless in practice.
Can’t you put the render at the top, so the script blocks before anything else, to make sure that (a) it always executes and (b) it removes all the JSON content from the page?
Or actually, you can probably just put some closing angles before the start of your render to make sure you don’t get broken from above. Little sketchy though.
If the script were at the top, it would run before any of the rest of the JSON gets inserted into the DOM, so it would (1) not have access to it (unless it sets up an async callback to do it) and (2) not be able to remove it from the DOM (until that callback runs).
The script might be able to set up mutation observers that notice stuff being added to the DOM and immediately remove it into a buffer the script maintains. This might actually be pretty viable.
> Or actually, you can probably just put some closing angles before the start of your render to make sure you don’t get broken from above.
That won't help with the fact that the data will actually be "corrupted" by the HTML parser. This is why all the HTML inside the JSON in the example is HTML-encoded.
One plausible fix for _that_ is to have a <plaintext> tag right after your <script>. So put the #render as the first thing in the JSON, set up mutation observers in the script, <plaintext> to prevent HTML-parsing of the rest of the doc, and this might be pretty robust to random HTML bits in the JSON data.
> One plausible fix for _that_ is to have a <plaintext> tag right after your <script>.
That seems more plausible since `<plaintext>` can't be closed. In fact mutation observers can be used to get and process the partially downloaded JSON before onload (I'm pretty sure this is possible but haven't tested, YMMV). That would be still horrible as a general solution, but might be actually an interesting solution for more limited situations.
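A sketch of what that envelope might look like (untested, and `render.js` is a made-up name). The file stays valid JSON, while everything after `<plaintext>` reaches the script as raw, unparsed text:

```html
{"#render": "<html hidden><script src=/render.js></script><plaintext>",
 "title": "Raw <b>HTML</b> in the data now survives unescaped",
 "body": "..."}
```

Then render.js reassembles the document once it has fully arrived:

```js
// render.js -- assumes the exact envelope shape above
addEventListener('load', () => {
  // Everything after <plaintext> was never HTML-parsed, so no encoding needed
  const tail = document.querySelector('plaintext').textContent;
  // Re-prefix the part the HTML parser consumed to rebuild valid JSON
  const data = JSON.parse('{"#render":"' + tail);
  document.body.textContent = data.title; // real rendering goes here
  document.documentElement.hidden = false;
});
```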
It's kind of the author's problem and not ours. If they want to figure out how to make a quirks-mode page work correctly, be my guest. It will just be unnecessarily painful.
Quirks mode is not the only problem of this approach.
It may not significantly affect users of modern browsers in their default configuration with good internet connections, but it does affect plenty of other things.
You want to parse things in the document? Now you need a whole different suite of tools from the usual tools you use. Your library that parses all the meta tags, identifies the content, &c. is now useless. Now you need either a full JavaScript execution environment, or a JSON parser instead of an HTML parser (and that JSON has completely lost the semantics that HTML provides, so you can’t query things like “document title” or “meta description”).
You have JavaScript disabled? Here, have a mess that, well, it’s better than most client-side rendering things in that the content is still probably there, rather than the screen just being blank, but there are reasons why you should always prefer server-side rendering for things like blogs.
You have a slow or unreliable internet connection? Now the page is taking longer to load, and until the JavaScript loads, the page is empty—and it may fail to load.
I object to people doing things like this as more than a fun technical demonstration because it does harm some users.
> You want to parse things in the document? Now you need a whole different suite of tools from the usual tools you use. Your library that parses all the meta tags, identifies the content, &c. is now useless.
They're no less useless than for SPAs that render their content in JS. That's all this site is. If your web scraping suite can't handle content loaded from JS, then you're already locked out of most of the web.
Same if you disable JS. Most of the web will be broken for you, and it takes an annoying amount of effort to enable the minimum necessary scripts.
For just general web pages (as distinct from apps), it’s not common to actually require JavaScript. I know this because I’ve had JavaScript disabled by default for the last two or three years, and it’s really not all that much of a bother.
The key thing here is that SPAs normally get some kind of interactivity benefits from being written in that style (though I confess they break things that the platform provides, by reimplementing them badly, at least as often), such as loading same-site links faster. But this thing doesn’t do that; it’s purely a projection, like XSLT. It should be done as part of a generator or server, rather than on the client side.
Maybe browsers should artificially delay the rendering of pages in quirks mode to create an incentive for website creators to use proper formats/techniques.
I can imagine the author had their fun, I also had fun reading the article, but I don't think this is leading to an accessible internet.
> Maybe browsers should artificially delay the rendering of pages in quirks mode to create an incentive for website creators to use proper formats/techniques.
Interesting idea, but Google has a dominant position in both the browser space and the search-engine space. They can punish such pages in the Google search rankings, and this avoids the obvious retort of Why are you deliberately making my browser worse?
> Yeah, please don’t; this is an abomination that’s fun for demonstrating and teaching how these things work, but should absolutely never be used in reality.
Would you say the same about taking a browser -- which was designed to be a document viewer for researchers -- and turning it into an entire application execution platform like we have now?
Point is: It's silly to say things like this, because this is how innovation happens.
You know, if we could just go back and make that un-happen, we probably would have built an actual internet application platform. That probably would have been a lot better than the layers of hackery that we deal with now, but ok, that's pure speculation.
Perhaps, but that doesn't mean it would be better.
Look at how many over-engineered platforms/frameworks Microsoft made, and it just ended up being over-complicated or reached a point where it was no longer worth continuing.
No, please read my other comments here—the point is that this thing doesn’t solve anything, but introduces various problems.
The only tenuous claim it can have is that the document is JSON, so if you want to parse the data, maybe you’ll find it easier? But in practice this is not useful: you can already embed or link to a JSON representation in the HTML, and that JSON representation then won’t be constrained by having to embed the renderer, either.
It solves the problem of having to use multiple languages across front/back-end by using Javascript/JSON for everything. It may seem as ridiculous as using Javascript to write back-end server-side and desktop software, yet along came Node and industry hype and here we are today doing exactly that.
About the blog platform, I honestly thought the author was being sarcastic or joking when they put it out there.
It is not even a valid JSON response to begin with (it's served with text/html), and if the fetcher is configured to ignore that, you might as well configure the origin server to serve the JSON response based on the Accept header.
Well, not much worse than all the other fancy JS ways to build a page that everyone and his mother are using today. They are all headache inducingly bad.
That last line has varying degrees of invisibility in different markdown viewers I looked at.
Obviously there's optimization that could be had here but this is equally hacky IMO and simpler since you can just write MD instead of JSON. You lose a couple of key features though -- templated components for example. However, I imagine you could shoehorn those in without much effort.
This has the advantage that without JS enabled you'll get poorly formatted markdown in the browser.
It's a valid JSON document that's basically unrelated to the HTML that ends up being displayed, so it's not exactly equivalent to the submitted link.
What the submitted link achieves is impossible without JS, which is also why it's a cute hack and should only be used in production with the knowledge that it's not going to work well for clients without JS.
In regards to the character encoding issue, I wonder if they could just move the #render to the top of the object by inserting it as the first property.
I'm guessing there might be issues with the streaming nature of html parsers then. If you put it at the top and the JSON is of any non-trivial size you might end up with the script trying to read invalid JSON since it's not fully loaded yet.
The script could be tied into the global onload event, so that it will only attempt to parse the JSON after the page has been fully loaded. Likely the HTML block is just placed at the bottom of the JSON for aesthetic reasons.
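A sketch of that approach; since the raw response body is valid JSON even when the DOM copy has been chewed up by the HTML parser, the laziest version just re-fetches the document (hopefully from cache):

```js
window.addEventListener('load', async () => {
  // The HTTP response body is the pristine JSON; the DOM copy may be mangled
  const raw = await (await fetch(location.href)).text();
  const data = JSON.parse(raw);
  // ... project `data` into the DOM here
});
```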
I don't see this error on FF. Are you seeing this on the console?
I put the meta tag there to render some special characters without relying on the server, but the HTTP header would be a better option (`Content-Type: text/html; charset=utf-8`).
Hey! Yes, however now I notice that this only happens when running locally. I took your page, replaced the script and CSS with absolute links, and loaded it locally as file://
The reason it works locally is that charset detection works differently when loading from a local file than when loading from a web server.
In particular browsers tend to be a lot more enthusiastic about treating local files as unicode than they are for responses from web servers.
The reason the meta charset doesn't pick up is you're missing quotes around `utf-8` (also it may be past the 1024 byte mark, hard to say without running wc).
The reason browsers' charset autodetection works differently locally is likely because HTTP standard specifies Latin1 (ISO-8859-1) as the default encoding (at least in 1.1).
Or server sends explicit charset in HTTP header (`Content-Type: text/html; charset=UTF-8`), so meta element override is not needed at all. Naturally, there is no such header present in file:///.
For the third one, there's no "skipping over" involved. The <html> and <body> opening tags are optional in HTML. The browser sees some non-whitespace text (the opening '{') not in the context of any other tag, auto-opens those tags, and the text starts being parsed as body text.
If the page loaded slowly enough, over multiple packets that arrived separated in time, you would see the JSON text coming in and rendering as text, with all the curlies and all, until the browser gets to the <html hidden> part of the JSON. At that point, some common-error fixup dating back to the Netscape/IE 3 days kicks in: when you see an <html> or <body> tag and one has already been opened (due to some stray content at the beginning of the file, likely), you don't open a new one, but copy the attributes to the existing one. This copies the "hidden" attribute to the <html>, which hides it. After that either the script executes and does its work, per your fourth bullet point, or script is disabled and the style rule inside <noscript> unhides the <html> element, but at that point you will of course just see the JSON text, parsed as HTML.
> I do however get this warning in Firefox, so perhaps this is pretty fragile:
I'm not sure whether the site changed since then, but now it's sending `charset=utf-8` in the content-type header, so there's no meta prescan at all. Which is what allows the emoji after "Need Help?" to be decoded correctly.
Were you seeing the meta prescan warning on your local test file, or on the site itself?
> I'm guessing differently-configured means not in quirks mode?
No, it means things like preferences about how to handle pages without encoding declarations via various heuristics (e.g. scanning byte value frequencies and guessing what character encoding is in use based on that).
Quirks mode is determined solely by the doctype of the page. This page has no doctype (since it starts with a '{'), so it's always in quirks mode. And what quirks mode does is enable a set of behaviors designed to not break content written more or less in the "before HTML 4.0" era. https://quirks.spec.whatwg.org/ should have a more or less exhaustive list of quirks mode behaviors browsers are expected to implement. Some of these are extra-fault-tolerance (e.g. the hashless hex color and unitless length quirks), while some are just replicating pre-CSS browser rendering behavior (like the line height calculation quirk, which a bunch of sliced-image stuff relies/relied on).
In Firefox it renders in "quirks mode" because it can't find a standards compliant page. There used to be downsides to the page rendering in quirks mode instead of standards compliant mode, but I'm not sure any more.
> The JSON must contain only pure information without any concern about design or markup.
Wellll.. I mean the json has markdown syntax in it. That's a lot nicer than html tags for markup, but it's still markup.
In case anyone reading hasn't seen it before, browsers have a thing called XSLT built into them that does something similar for XML documents. You serve up your data as XML, add a tag that points to an XSLT document, and the browser will use the transform on your document automatically. If the resulting output is renderable XHTML, it'll display it like a regular webpage.
That being said, I wouldn't recommend doing that, since XSLT is a programming language whose syntax is XML itself. Apparently web standards folks in the early 2000s thought the future was XML all the way down.
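For anyone who hasn't seen it, the moving parts look roughly like this (file names are made up). The XML document points at the stylesheet:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="page.xsl"?>
<post>
  <title>Hello</title>
  <body>The browser applies page.xsl to this before rendering</body>
</post>
```

And page.xsl projects it to XHTML:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/post">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <p><xsl:value-of select="body"/></p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```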
It's very cool, but also, very strange, if you come from a procedural language background.
Also, XSLT is really given power by being mixed with XPath (which is procedural).
I suspect that if I had known more about FP back when I learned it, I would have had an easier time. I wrote up a really long, painful post about XSLT, way back in the early 'oughts.
I learned XSLT twice. The first in the very early 2000s when I was creating a large and mostly static website for a magazine with a lot of content, and mostly just hacked it together. It was ok but mostly I was using it as a simple templating language. The second time was about five years ago, as a module in my MSc, by which time I was well-versed in functional programming — this time it was easy to do all sorts of weird and wonderful stuff, including a TAP-outputting assertion library for templates written in XSLT itself and a library for doing depth-first searches over graphs of nodes.
XSLT is mainly used to transform XML into XML. The resulting document doesn’t have to be XML, though. I remember being assigned a task of parsing and importing a large XML document into a MySQL database. This was supposed to be done on PHP. I kind of felt repulsed by the idea of writing the parser in PHP (it would have been horrible). So, I used XSLT to transform the document into CSV and then just imported it using LOAD DATA. Worked like a charm.
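The trick is XSLT's text output method, which drops all the tagging; roughly like this (element names invented for illustration):

```xml
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/products">
    <xsl:for-each select="product">
      <!-- one CSV row per product element; &#10; is a newline -->
      <xsl:value-of select="id"/>,<xsl:value-of select="name"/>,<xsl:value-of select="price"/><xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```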
In the beginning of my career, about 15 years ago, I had a job working with a large auto parts retailer and they had all of their products in these massive XML files. We used XSLT to transform those into all kinds of things, including physical printed catalogues that would go in their stores. Obviously the XSLT didn't actually print catalogues, but we did turn them into some format that I forget now (maybe PDF!) that we could just send straight to the printers.
It was pain and suffering developing these style sheets, but once done it worked like a charm.
That's nice. I was in the same line of work. Started from building our own XML parser, when there were none available yet. Later I developed this fun XSLT that generated a fullblown Forms UI based on an XML and/or XML Schema which we used in a custom CMS. Then with XSL-FO and Quark XPress we generated user manuals and product catalogues from that. The XSLT was surprisingly fast for the time, and given the transformations it needed to do.
> Obviously the XSLT didn't actually print catalogues, but we did turn them into some format that I forget now (maybe PDF!) that we could just send straight to the printers.
One of the big problems with XSLT, is that XSLT only started to become useful after Version 2.0, and wasn’t supported by languages at that level. This was the main reason I stopped using it.
2.0 was (maybe still is?) supported by Saxon, a proprietary library, needing to be licensed. Built-in support was capped at 1.5.
If I remember, in order to run XSLT 2.0, you had to license and install Saxon on your server (I believe it was a Java application, meaning the server needed to have a JVM), then call it using system calls. Awkward, at best.
But this was many moons ago. I suspect/hope that things have improved, since then.
If you are coming from FP, XQuery is arguably a better fit. You lose template matching, but the syntax is much more concise (although still verbose by PL standards).
Yeah, I thought about using XML/XSLT too, but I think JSON would be more aligned with today's tools. Also, the render could be written in React or Vue, and add all the functionalities of a modern web app, for example, not just HTML.
It depends on which tools you're thinking about. When it comes to web browsers, they can process XML+XSLT (to produce and render HTML output) out of the box. On the other hand, it looks like the hack you're using here to make the JS renderer auto-load is not really valid HTML (as a whole)?
I love XSLT in theory. It had the daring to say that the most frequently programmed thing that we do is just translate data structures of one shape into data structures of a slightly different shape, and it said “That is what I shall focus on, doing just that.” And the way that it produced that, by “just start writing your output document and insert XSLT when you finally need to loop or query the input” seemed quite nice too, a sort of PHP dialect which didn't have to “escape” out of the document type and resort to echoing raw text.
There is a pure functional programming language waiting to be resurrected from the ashes of XSLT, following Haskell's original idea that a program is a pure function from a stream of inputs to a stream of commands. I just hope that someone removes the XML from it, to save on the pain of my pinkies mashing those greater-than and less-than signs. I like the abstract idea that XML is recursively embeddable, and even the radical suggestion that maybe XSLT should be involved in metaprogramming its own syntax trees, but I am not sure that this is worth it. Maybe if we blur the lines between our code editor and a generic UI, someday?
The basic problem with XSLT is that it is strictly inferior to equipping any general-purpose programming language with an XPath implementation and just going to town. Let alone giving that programming language an even half-thought-out library designed to help with such a task.
Even Go, which is really not amenable to any sort of in-language DSL or syntactic sugar, is nicer to use than XSLT itself.
XSLT locks you in a trunk without enough tools to do what you need. I remember using a variant of XSLT Microsoft put out back in the day that let you embed Javascript into it... but then I noticed, why not just do it in Javascript? So I did. And it was soooo much better.
There are good ideas in XSLT, but they are so buried in a cascade of bad decisions and limitations and restrictions and missing functionality and bizarre ways of doing things (and I mean, I speak Haskell, pure FP doesn't scare me, and they're still bizarre) that the best remedy is just to start over.
I'm quite fond of DataWeave for this, as an FP-inspired data transformation language with plenty of built-ins, including support for XML, JSON and beyond [0].
Apparently there's an online playground [1] and Beta support for LSP now [2].
I've only ever used it in a containerized-Java setting.
HTML templating with XSLT may be used to save data, too.
For example, if you build a social network profile page using XSLT, alice.xml and bob.xml can reference the same profile.xsl stylesheet which converts the XML profile data to the rendered HTML page. Since the profile.xsl stylesheet can be cached, whenever a new profile is visited, only the profile data in XML needs to be transferred.
Note that the templating can also be used to save data when rendering a single page. Imagine a Twitter clone that displays a list of tweets. If, hypothetically, 80 characters of Tweet data plus 500 characters of XSL template, produce 250 characters of HTML markup, you only need 3 tweets to start saving data (250x3=750 > 500+3x80=740).
> Apparently web standards folks in the early 2000s thought the future was XML all the way down.
It should have been, but just like the masses rejected LISP for its parens, the masses rejected XML for the closing tag. We could have avoided PHP and the zoo of MVC frameworks.
If only XML wasn't so damn complicated, it might have had a chance. People love to point at close tags, but there's a long list of more serious problems.
External entities, namespaces within namespaces, CDATA, namespaces that confusingly look exactly like URLs, but aren't, parser vulnerabilities.
The way XML does namespaces is one of the best things about it - you can host any XML dialect within another, with no name clashes, and without losing the schema identity of embedded parts. Infinite data composability, subject only to the constraints placed by schema authors. And namespaces are URIs, so they can be URLs.
The bad smell was mostly coming from all the things inherited from SGML - CDATA, entities etc. Although I would also add that separation into elements and attributes is not a particularly convenient way to model many things; it mixes up differences that should really be orthogonal (ordered/unordered, atomic/composite etc).
All of the things you mentioned serve a purpose that, at scale, would have to be replicated in some ad-hoc manner in JSON.
Yes XML is more expensive to parse, but so is all the extra stuff you have to do to get around the various limitations of JSON, which could have been done cheaply by an optimized XML parser.
> We could have avoided PHP and the zoo of MVC frameworks.
I doubt that. Javascript frameworks have their value inside a corporation - they enable replaceable cog worker programming. The result sucks, but hiring and replacing React programmers is easier.
I tried programming in XSLT and it was not fun. I'm sure if it took off we'd have the equivalent of Clojure that compiled down to it, but by itself it is not a hacker's language, if you know what I mean.
Programming languages are addressed to humans; XML is addressed to machines.
That is the difference.
BTW the problem with Lisp is not the parens, it is that it is too powerful for its own good. C++ and Java got their popularity from their restrictions, really.
> browsers have a thing called XSLT built into them
I don't think they actually do anymore, not all of them for sure.
> web standards folks in the early 2000s thought the future was XML all the way down
Yes. What fools, to think that machine-produced content transmitted from machine to machine and displayed through other machines would be better handled and more accessible if formatted in machine-friendly ways that were still readable to humans...
But it wasn't to be, not just because XSLT was somewhat botched (way too hard, way too many enterprise-looking warts...) and probably insecure, but because people are too lazy to produce readable markup (who wouldn't get tired of reading <prst><artdd><!CDATA[[ all day long...) and to make sure tags appear only where they should and get closed properly. Add the inevitable touch-of-death of enterprise vendors ("let's use xml in such a way you'll only be able to deal with it with our tools!"), and the game was up.
Working for a government agency that had bought into MarkLogic in 2004, we built an entire analytical site that used XML from the document server, queried via XPath, and rendered it into web pages via XSLT. The web page was defined entirely in the XSLT transform. Worked great, and if you were already writing your data queries in XPath, writing the UI in XSLT kinda made sense.
> Wellll.. I mean the json has markdown syntax in it. That's a lot nicer than html tags for markup, but it's still markup.
Isn't markdown literally just HTML shortcuts for a limited set of common markup cases? IIRC, it's not uncommon to have to insert HTML tagging into markdown to get certain things to work.
Markdown compiles to html, and if I had to guess, it's probably a superset of html? (Maybe, possibly not all html is valid markdown, I haven't checked)
Assuming it's a superset though, the part I was talking about above was the (markdown - html) part, where markdown adds less noisy syntax for bolding, headings, lists etc.
Way back when, the WoW armory (web-based character, guild, etc lookup) used to use this. It was incredibly useful for third-party integration - who needs scraping when they serve you the data on a silver platter.
Yes yes, separate content and markup/style. But you can already do this with HTML?
The only advantage I see is that you can actually support markdown like you do in your page, since it's less verbose than HTML-tags and doesn't make the text unreadable if you read it as plain text.
But with HTML5 you can introduce arbitrary tag names to pretty much get the same with an XML-like structure instead of json. Just use <subtitle> <what> <description> and supply CSS for it. If you want to consume it with something else, swap out the JSON parser for an XML one, navigate to the body tag and from there on it's the same thing.
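For instance (tag names invented, per the above):

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <style>
    /* Unknown elements default to display:inline, so give them layout */
    subtitle, what, description { display: block; }
    subtitle { font-size: 1.4em; font-weight: bold; }
    description { color: #555; }
  </style>
</head>
<body>
  <subtitle>Pure information, custom tags</subtitle>
  <what>Some fact about the thing</what>
  <description>Styled entirely from the stylesheet</description>
</body>
</html>
```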
I mean it's a cool trick you came up with, but it doesn't seem worth the effort, and relying on quirks mode seems brittle.
I would love an internet that provides content as JSON in a consistent format. No longer are we restricted to the often awful UI and UX choices and unnecessary stylistic overhauls that make everything more difficult to access. No more pop up banners. No scroll jacking. It sounds like bliss.
Fun fact: for servers that are not too particular about the Content-Type they receive or the presence of extra object attributes, it is also possible to submit (fixed) well-formed JSON using HTML forms with no JavaScript and a dash of hackery.
The important bit is to include a "dummy" key at the end of the JSON object, and an input value that closes the quotes and any open curly braces. That way the "=" character sent in the encoding of the form elements doesn't interfere with the meaningful JSON content.
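A sketch of such a form (the endpoint and field names are hypothetical; the `enctype` is what keeps the browser from percent-encoding the body):

```html
<form action="https://example.com/api/users" method="POST" enctype="text/plain">
  <!-- text/plain encoding sends the body as: name=value -->
  <!-- so the stray "=" is swallowed as the dummy key's value -->
  <input type="hidden"
         name='{"user": "alice", "role": "admin", "dummy": "'
         value='"}'>
  <input type="submit" value="Send JSON">
</form>
```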
There might be a clever way to get it to submit dynamic JSON that changes based on user input without JavaScript, but I haven't thought enough about it.
This technique is sometimes useful for CSRF attacks.
If I understand what you're asking: this trick is useful for submitting data to API endpoints that expect the entire request to be well-formed JSON, rather than just a small part. The (pretty-printed) POST body from the example form in my previous comment will look like a regular JSON request as far as the destination server is concerned, with the addition of a "dummy" key:
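```json
{
  "user": "alice",
  "role": "admin",
  "dummy": "="
}
```

(field names as in the hypothetical form sketched earlier in the thread)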
If the JSON is just a hidden form value as you suggest, the request as a whole will not be treated as JSON data. Then invalid characters will (usually) be added to the request body by the browser, and the server will (probably) be unable to parse it, causing the request to fail. This is due to how forms are encoded for POST requests.
On the other hand, if you're wondering why anyone would ever do this, then I do not have a good answer for you :)
You know, I once had a resume powered by JSON. It was rendered to a PDF (and HTML) but the syntax was backed by JSON. This was years back and I shudder to even recall the idea.
I didn't create the project mind you, it was some NPM package back when I was a fresh 'un who didn't have thoughts on using a billion subdependencies
Anyway, I was nearly broke and when I went to apply for the benefit, I was asked for my resume as a word doc
Clearly, you can imagine how this conversation went down, but I had brought a PDF on a USB stick, which I offered to print out instead.
The clerk refused to let me plug in my USB stick for fear I was going to "hack" her, and it took some convincing to get her to navigate to the HTML page and save it as a PDF with Chrome so it could be printed.
I guess the lesson here is that if you expect to run out of funds, make sure you have your most essential documents stored via Microsoft Word?
I have my resume defined using this schema, which is handy because it's supported enough that every few years when I actually need to update it, I can find a website (such as https://resumake.io) where I can just copy in the JSON and get a nicely formatted PDF out of it.
I’ve generally saved my HTML resume to PDF without any major issues as a professional SWE for the past ~15 years or so, although I’ve been dealing primarily with SV tech companies and not government agencies in the UK.
Microsoft Word is fine too but I would still use something like PDF for export and sharing (still not sure how Microsoft Word solves the usb problem for you).
Generally though I like something text-like that I can revision-control more easily and edit in vim. Maybe I’ll switch to something like pandoc going forward (then you can generate Word if they really want it).
The real head scratcher is that after all that, they said "Ah, you have rent of $160/week? We're glad to offer you $110." Perhaps I misunderstood, but paying rent is very much a binary situation, so I wondered what was even the point of such a system.
My resume is an XML document and a stylesheet I feed to Prince (https://www.princexml.com/), which is an amazing piece of software. I also used it once to generate a book I had printed by some on-demand printing company. I got great output out of it.
Isn't Reddit like this, in a way? You can append ".json" at the end and get the data for a page. Others can build their own UI on top of this, if they don't prefer the first party representation.
The same could be achieved by having the server return different views of the data for different HTTP Accept headers, although not as convenient as appending a file extension. I once did this to interpolate REST API responses into dedicated HTML templates if requested with Accept: text/html, which made exploring the API in the browser more enjoyable.
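In Express, for example, that negotiation is only a few lines (a sketch; the route and data are made up):

```js
const express = require('express');
const app = express();

app.get('/posts/:id', (req, res) => {
  const post = { title: 'Hello', body: 'Same data, two representations' };
  res.format({
    // Accept: application/json -> the raw data
    'application/json': () => res.json(post),
    // Accept: text/html -> a rendered view of the same data
    'text/html': () => res.send(`<h1>${post.title}</h1><p>${post.body}</p>`),
  });
});

app.listen(3000);
```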
If you go back to the web as a bunch of file servers, where every URL ended in the extension type of the document you wanted, this is what you get. Then MVC frameworks said "hey, if the URL represents some semantic aspect of the View, we can show off the power of MVC by letting users swap out the view by changing the extension!" And voila, some.com/url.html for HTML and some.com/url.xml for XML and, later, some.com/url.json for JSON
Most of the big MVC frameworks offered this out of the box at some point, which made life easier before dedicated RESTful APIs became a thing.
It's a fun hack! The website could make it clearer that it only works because browsers tend to be very fault tolerant, and the JSON itself is by no means a valid "web page" (i.e. there are no guarantees it'll render properly).
You may also be entertained by my http://github.com/shadowcat-mst/App-plx/ project which has a script that produces a documentation page that's both valid HTML and a valid perl bootstrap installer for 'curl | perl -' purposes.
An example of the output is at http://trout.me.uk/perl/plx.html (that one's actually served over http, it's a demo, please don't blindly run it).
The HTML is just plain pod2html output; I should probably steal the mojolicious.org or perldoc.perl.org POD rendering to make it look actually decent.
Feel free to drop by #perl and/or #mojo on freenode if you're interested in chatting more :D
Thanks, appreciated! About Jasonette: WDR is not a markup language. Actually, it is the opposite. It tries not to use markup in the data, but puts everything related to design and markup inside the render.
I think a great use case for this is for working with files locally. I had to change the script tag to use a relative link, but then it opens up fine as a local file. Now I can share those files via DropBox or the like, so it's very end-user friendly.
If you were writing a desktop application, you could be saving user data as this JSON-HTML quine (JHQ?), so it's automatically accessible on any device, but also still accessible to e.g. the jq utility.
Since any modification is liable to move the "_" field, you might address that by using an array envelope instead; [{"real": "data"}, "<html><footer>"]
For a production site, I think you ought to, server-side, check the Accept header and pre-render the conversion.
Another note: if I do Save Page As in Firefox, it saves the page as standard HTML, losing the JSON.
While the source looks like JSON, the page is still sent by the server as text/html. Browsers are so resilient to malformed markup that they actually parse the content inside the <html> tag at the end of the document (firing the JS script that renders the actual page), ignoring all the (pseudo) JSON around it.
The page is rendered in quirks mode because the source omits the <!DOCTYPE html> at the beginning of the document, so that it can remain valid JSON.
Yes, the initiator is valid HTML with a `<script>` pointing to the script that receives the JSON and renders the page. The interesting part is that you can `fetch` the page directly and it will respond with valid JSON.
This is super cool and clever. I’m working on a personal site where I’ve been trying to build in some unexpected fun. If I borrow from your technique I’ll be sure to credit you!
> The JSON must contain only pure information without any concern about design or markup. All the design and markup required to render the page must be inside the rendering script.
The source code has markup in the form of Markdown, e.g. ## or * text
Yes, this is a good catch. The reason for this is that the basic render I created is generic, and it is not what you should use to render your JSON. The render should fit the JSON perfectly. The basic render is just for a quick start.
For example, the basic render turns "# Title" into an h1 header, but if you created the render, you know the internal schema of the JSON, so you can use the correct HTML element for that without resorting to this hack.
But the principle is important: try not to use markup in the JSON, because this would make it difficult for others to consume your website with JSON.
"try not to use markup on the JSON, because this would make it difficult for others to consume your website with JSON."
Since there is no standard for representing rich text in JSON, if you want rich text, this is simply an unavoidable problem. At least Markdown is semi-standard and people can get libraries for it. Embedded HTML would also work. Defining an ad-hoc rich text embedded would be worse.
You are correct. Sometimes, some markup is inevitable. You can try to break down the information to render separately, but it is not always possible. So I think a little bit of markdown would be an acceptable compromise.
At this point, browsers should allow non-markup files - specifically JS or WASM - to be the base of a web page. Browsers will already automatically add in <html> and <body> into the DOM if they're missing on a page, so literally, you could have a website that's just a <script>//do DOM stuff</script> and it'd be parsed and run without issues.
The index page at this point is mostly just a DOM skeleton on which to hang references to CSS, media, scripts and metadata. We might as well cut out the last step already.
If there was an official HTML-to-JSON format, we could even use that as well. It would probably already exist, but the question is always what to do with node attributes, text nodes and child-nodes. There's a dozen ways to organize them in JSON.
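For example, one of the many possible conventions (this particular shape is just an illustration):

```json
{
  "tag": "article",
  "attrs": { "lang": "en" },
  "children": [
    { "tag": "h1", "attrs": {}, "children": ["Hello"] },
    "text nodes could simply be strings"
  ]
}
```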
A similar approach is (over)used for codegolfing, namely the PNG bootstrapping method [1] [2] [3]: your code is compressed into a PNG image and the bootstrap in HTML gets appended to the PNG file to decompress and eval the code. The exact method requires some experiments, as browsers change their rules to determine which is a valid image and which is a valid HTML.
I created this tool out of a very odd necessity: I hate animations and overlays ;). I would like to have access to a website's data directly and build my own interface, but for that I need access to the information only. I know that the website could expose an API, but WDR feels much more natural and easy (try doing a `fetch` on the page and getting the JSON instantly).
Now that I have built it and played around a bit, I don't see how I could build a website any other way.
For me, the website's information is important, not the design. WDR exposes that.
I like the idea of separating the content from the presentation, although working with JSON directly to create the content, to me, is not fun.
Did you play around with just using markdown directly and embedding html tags in that in a similar way to format the page? I can't imagine myself intentionally writing web content using JSON. I'd probably have some other format up front that would be converted to JSON, but at that point I may as well just write some static page generator to create the html.
I think it is early to draw more conclusions, but I had to adjust my way of thinking to work with WDR. We are so accustomed to thinking data+markup that it is hard to imagine any other way. So I started to create the JSON data without thinking about the design at all: "What information would I like to see on this page?" I created the JSON thinking only about that. It turns out creating the render afterward was much easier and faster (and more pleasant). It is, at least for me, a new way to think about the page.
This makes a lot of sense and I agree with this need! I want to throw in a healthy dose of skepticism about whether your solution can address that need -- only because I hope that hearing it early can increase your chances of success!
I think a hard problem ahead would be how to convince other people to also adopt this framework, and how to make sure that people are using the same JSON keys to mean the same thing. I'm a little skeptical that this can happen on its own, because we've already tried a standard that was designed to convey just the content of the website without the presentation -- HTML.
I'm curious how you plan to prevent your JSON format from being (ab)used for presentation purposes, where people add extra content to make the page display a certain way. And if you have a plan, is this plan feasible using HTML as well?
>I think a hard problem ahead would be how to convince other people to also adopt this framework, and how to make sure that people are using the same JSON keys to mean the same thing.
At first, I was thinking of using some kind of general schema, but those things never work. So I decided on something much simpler: the render determines the schema. This is an important aspect that I am working on. If you choose some specific render, your JSON will have a specific shape. For example, if the render is for a landing page, the JSON could be something like "about", "products", "team", etc. (never <section>, <h1>, etc...). But it is too soon to tell; I'm still thinking about it.
(but an interesting corollary is that the render could also be a program with a "config" part. It would be an alternative to website builders. Every render would be a different WB tailored for the website)
> I'm curious how you plan to prevent your JSON format from being (ab)used for presentation purposes, where people add extra content to make the page display a certain way. And if you have a plan, is this plan feasible using HTML as well?
I don't have a plan for that yet, and it will be difficult because we are accustomed to mixing data+markup. But I think the mindset shift is to stop thinking of the website as just for the browser. The website could be information first, presentation later.
Honestly, I think letting people include their own code is good, people will always want to do things that you couldn't have imagined yet, and for their own good reasons.
However, you need to make sure the platform can eventually catch up with the de facto standards people come up with, so that people don't have to keep reinventing these wheels. As a comparison, see how JavaScript features evolved from jQuery and Underscore features.
> But I think the mindset shift is to stop thinking of the website as just for the browser.
I agree. I'd suggest that you think through how you can convince people to stop thinking about the website as just for the browser.
In the last decade, many people might have thought that the rise of the mobile web would make web developers separate presentation from content. But instead of using the same HTML with different stylesheets, what actually happened is that every major website started maintaining two completely separate websites, one for desktop and one for mobile! I'd recommend thinking about why this happened, to make sure your framework doesn't nudge people down the same route.
It's a neat trick using the Browser's Quirks Mode to load JSON that can bootstrap a framework, I'm not sure I really understand the benefit though. In order to actually get to the point of bootstrapping, the server has already transferred all of the content, plus the bootstrapping Javascript in a single payload. It seems like this implementation has the weaknesses of using a Javascript framework, without the benefits. Together with all of the weaknesses of static content, without the benefits. All the while depending on a quirk of the browser for support, which not all browsers may support.
The idea is to separate the data in JSON completely to be consumed outside the browser if needed, but still render correctly in the browser. The traditional javascript framework only solves the latter.
I tried to solve the quirks mode issue, but it is kind of hilarious that the browser needs the odd <!doctype html> to flip to standards mode. There are ways to declare the doctype via headers or with XHTML, but this would complicate a simple solution. I was quite surprised, though, by how consistent the look is across different browsers, at least modern ones.
> The idea is to separate the data in JSON completely to be consumed outside the browser if needed, but still render correctly in the browser...
I'll accept that answer. As controversial as this comment may be, XML/XSLT is a very good fit for this purpose. It might not be particularly modern, however it's almost universally supported.
From what I can see, your JSON still contains "style" information in the form of Markdown, so this doesn't really separate information from design. And if we're going to accept Markdown as minimal styling, why not just use regular HTML, which technically already separates information (HTML markup) from design (CSS)?
> An object is an unordered set of name/value pairs.
But this site seems to be assuming that the key/value pairs are parsed in their original ordering for everything to display properly. Is it safe to assume JavaScript will always parse the keys in order?
Since ES2015, JavaScript engines must enumerate string keys in insertion order (integer-like keys come first, in ascending order), and JSON.parse is defined as doing the obvious and only sane thing, setting each property as it goes.
Consequently, so long as your keys are not numeric, yes, JavaScript guarantees that it will all be in order.
But that’s for JavaScript. It is incorrect to treat JSON objects as ordered, because various libraries in various languages will discard the order for various reasons (e.g. efficiency, DoS resistance), since it is defined as being not significant. If you care about the order of things, use an array instead.
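A quick way to see the JavaScript side of this (the object literal here is just for illustration):

```
const parsed = JSON.parse('{"b": 1, "a": 2, "10": 3}');
// Integer-like keys are enumerated first in ascending numeric order,
// then the remaining string keys in insertion order:
console.log(Object.keys(parsed)); // ["10", "b", "a"]
```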
The website uses the basic render, which is just a start to make it easy to create a page quickly. But ideally, you create a custom render that does not depend on the key order.
At my previous job I worked on a whole server-side UI framework based on XSLT transformations. We built a kind of abstraction layer on top of ExtJS, HighCharts, and some 3D rendering lib of that era. It was hell to debug anything in there, because the output was HTML sprinkled with JavaScript and it was not always easy to identify which part of the XML and XSLT was responsible for a given part of the output. It was used quite successfully in one internal ERP mashup app that sucked data out of some SAP system.
Learning XSLT actually helped me understand C++ templates (big aha moment).
So we had this XSLT monster crunching on XML files, but what we really needed was JSX, which probably wasn't a thing until a few years into the project.
You could use this to syndicate blog posts, sort of like RSS, except each entry is JSON and could be viewed as its own page rendered by its own renderer (carried by a CDN and cached by the browser), or by a renderer of the user's choice.
Or you could... you know... do content negotiation per-request based on the Accept header, and serve the rendered markup sans JSON bullshit to clients requesting markup, and the JSON data to clients requesting JSON.
I know it's a complex idea, and it's hard to grasp ideas when they're first proposed, but the payoff is often worth the time invested. Even though it's only been <checks notes> 24 years since the RFC defining the Accept header was published, maybe it'd be worth spending say, 10 minutes reading about what it is, rather than however long it took you to write this abomination?
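For reference, a minimal sketch of that negotiation in Express (loadPost and renderHtml are hypothetical helpers, not part of WDR or any real API):

```
const express = require('express');
const app = express();

app.get('/post/:id', (req, res) => {
  const post = loadPost(req.params.id); // hypothetical data-access helper
  // req.accepts picks the best match from the client's Accept header
  if (req.accepts(['html', 'json']) === 'json') {
    res.json(post); // machine clients get plain JSON
  } else {
    res.send(renderHtml(post)); // hypothetical template helper; browsers get markup
  }
});
```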
Not related to the project, but I'm working on jsonblog.com, where your whole blog can be rendered from a blog.json file. Will look into WDR to see if it could be relevant.
My immediate thought for a use case is debugging APIs: pass in a param that adds this to the response and get it in a more human-readable form. Will test it out.
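As a rough sketch of that idea (the ?debug param, the endpoint, and render-debug.js are all made up for illustration):

```
const express = require('express');
const app = express();

app.get('/api/items', (req, res) => {
  const payload = { items: ['a', 'b', 'c'] };
  if (req.query.debug === '1') {
    // #render is added last, so the initiator ends up at the bottom of the file
    payload['#render'] = '<html hidden><meta charset=utf-8><script src=/render-debug.js></script></html>';
    res.type('html'); // the quirks-mode trick only works if the browser sees text/html
    res.send(JSON.stringify(payload));
  } else {
    res.json(payload); // normal API clients get application/json
  }
});
```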
No, you can't. Or create your own browser that can. Remember: HTML, JavaScript, CSS, XML, JSON, and the rest are only instructions for an interpreter (or even a compiler/linker if you feel the need).
I'm curious about caching potential with this approach.
Meaning: If the javascript (and other content) loaded via #render is highly cacheable, can this lead to pages that display as soon as the JSON is loaded?
I tested only on small websites, but the performance I experienced surprised me a bit. The render is very small and fast, and, as you said, so is the JSON page (it is also very small because there aren't any HTML tags); both are very cacheable. More testing is needed.
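One hedged sketch of how that cache policy could look on the server (Express here is an assumption; WDR doesn't prescribe any particular server):

```
const express = require('express');
const path = require('path');
const app = express();

// The render script is versioned (render-basic-1.0.3.js), so cache it aggressively;
// the JSON page itself should be revalidated on each visit.
app.use('/dist', express.static('dist', { maxAge: '1y', immutable: true }));
app.get('/', (req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.sendFile(path.join(__dirname, 'index.html'));
});
app.listen(8080);
```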
It would be neat to auto-load HTML templates from files, since if you edit them inline like this, you lose all the great tooling your IDE can provide for editing HTML.
If you put the initiator at the top and also add "<script>window.stop();</script>", the page starts rendering immediately.
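A speculative sketch of that variant (note that window.stop() halts loading of the rest of the document, so the render script would presumably have to re-fetch the page to get the full JSON):

```
{
  "#render": "<html hidden><meta charset=utf-8><script src=/render.js></script><script>window.stop();</script></html>",
  "title": "Example Page",
  "description": "This is an example."
}
```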
I have the same idea, but for running simple x86 Linux binaries in the browser. Anybody interested?
{
"#info": {
"title": "WDR"
},
"subtitle": "Web Data Render",
"title": "# WDR",
"what": {
"title" : "## What is WDR?",
"description": [
"This website is a valid **[JSON](//www.json.org/)**!",
"Check the source code. Instead of the habitual HTML and CSS, you will see just a plain JSON with the website's information.",
"WDR is a format to separate the website's **information** and **design**.",
"The website is readily available to be consumed outside the browser via JSON, but also still presentable to users accessing through the web browser."
]
},
"subscribe": "I'm creating a **blog platform** using this concept. Follow me on [Twitter](//twitter.com/gpiresnt) to be notified when is ready! ",
"how": {
"title": "## How it works",
"description": [
"It works by embedding a small initiator at the end of the JSON file.",
"For example, this is a valid JSON and web page:",
"```{\n \"title\": \"Example Page\",\n \"description\": \"This is an example.\",\n \"#render\": \"<html hidden><script src=/render.js></script></html>\"\n}```",
"The script `render.js` receives the JSON as input and is responsible to render the page."
]
},
"usage": {
"title": "## Usage",
"description": [
"First create an HTML file with the JSON information:",
"```{\n \"title\": \"Example Page\",\n \"description\": \"This is an example.\"\n}```",
"Include the initiator at the bottom:",
"```{\n \"title\": \"Example Page\",\n \"description\": \"This is an example.\",\n \"#render\": \"<html hidden><meta charset=utf-8><script src=/render.js></script></html>\"\n}```",
"The next section explains how to create the `render.js`."
]
},
"only-data": {
"title": "** Pure information **",
"description": [
"The JSON must contain only pure information without any concern about design or markup. All the design and markup required to render the page must be inside the rendering script."
]
},
"create-render": {
"title": "## Creating a render",
"description": [
"Create a new javascript project and install the package `wdr-loader`:",
"```npm install wdr-loader```",
"Call the `loader` function to retrieve the JSON:",
"```import loader from 'wdr-loader';\nloader(data => render(data));```",
"Create a `render` function to handle the data and render the HTML. Below is an example using simple `innerHTML`:",
"```function render(data) {\n document.head.innerHTML = `\n <meta charset=\"utf-8\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n <title>${data.title}</title>`;\n\n document.body.innerHTML = `\n <section>\n <h1>${data.title}</h1>\n <p>${data.description}<p>\n </section>`;\n}```",
"The `wdr-loader` code is available on [GitHub](//github.com/webdatarender/wdr-loader) with an example."
]
},
"basic-render": {
"title": "## Basic render",
"description": [
"If you don't want to create a render right now, it is available a basic render (used on this very website) to immediate use:"
],
"download": "[render-basic-1.0.3.js](//webdatarender.com/dist/render-basic-1.0.3.js)",
"instructions": [
"Just download the script and include it on the initiator directly:",
"```{\n \"title\": \"# Example Page\",\n \"description\": \"This is an example.\",\n \"#render\": \"<html hidden><meta charset=utf-8><script src=/render-basic-1.0.3.js></script></html>\"\n}```",
"The code is available on [GitHub](//github.com/webdatarender/wdr-render-basic)."
]
},
"remarks": {
"title": "## Remarks",
"description": [
"• The page is rendered in [quirks mode](//developer.mozilla.org/en-US/docs/Web/HTML/Quirks_Mode_and_Standards_Mode) and can present some layout differences on different browsers.",
"• Although javascript is necessary to render the page, most search engines, like [Google](https://developers.google.com/speed/pagespeed/insights/?url=webdatarender.com) or [Bing](https://www.bing.com/webmaster/tools/mobile-friendliness), will be able to read the page correctly.",
"• If you want to display the JSON for users that have javascript disabled, you can include `noscript` at the initiator: ```<noscript><style>html{display:block !important; white-space:pre}</style></noscript>```"
]
},
"support": {
"title": "## Need Help? ",
"description": "If you need help, have any feedback or just want to say hi, send me an [email](mailto:gpiresnt@gmail.com)."
},
"about": {
"creator": "Created by [@gpiresnt](//twitter.com/gpiresnt)",
"logo": "![logo](/img/logo.png)"
},
"#render": {
"_": "<html hidden><meta charset=utf-8><script src=/dist/render-basic-1.0.3.js></script></html><noscript><style>html{display:block !important; white-space:pre}</style></noscript>",
"css": "css/main.css"
}
}