
API server and a static front end – the future? - eatonphil
http://blog.eatonphil.com/2015/10/02/api-server-and-a-static-frontend-the-future/
======
justin_vanw
Probably not? Generally you need more efficient orchestration than you can
achieve without per-use-case endpoints (in other words, you need to do
interdependent operations, such as database calls that depend on previous
database calls, and those are too expensive to do over the internet due to
round-trip latency, so you do them server-to-server). Whether the endpoint
serves JSON or HTML, which is basically the distinction in this article, is
irrelevant.

I mean, if you just make the same endpoints on the server that you would have
made back in the HTML-serving days, but serve JSON instead, and then call that
an 'api', then sure, yes, but that's a distinction without a difference: you
are just moving the _rendering_ of the data into html from the server to the
client. That's only a small api, not an API as in 'the thing you use for
generic 3rd party programmatic access to your features'.

Also, rendering html on the client is silly and has no benefit, no matter how
hip it is. It's not faster, it's not more efficient, it's much harder to test,
it forces you to use a terrible language (javascript) instead of your-choice-
of-any-language (which can still be javascript), it makes capturing errors
harder, it dramatically increases the amount of code that has to be cross-
browser compatible and support older browsers, and ditto all of the above for
performance. It's just a completely stupid idea that people are doing for no
clear reason at all, generally because they have little experience and are
just cargo-culting whatever the cool kids with blogs are talking about.

~~~
xg15
I'll grant you everything except efficiency. As an example, I did a view-
source of the blog entry linked in this submission. The instance of the page
that I got was 22914 characters long, of which 2172 characters were actual
article text, roughly 10%. If I as a user wanted to view more entries from
that blog, I'd be very grateful if I only needed to download the remaining 90%
once, and not again for each blog entry.

~~~
justin_vanw
There are 2 things wrong with your analysis, IMO.

First of all, gzip will reduce this considerably. Both will end up much
smaller, but the % difference will also shrink, since the html version likely
contains much less entropy per bit.

Secondly, in my version of how to do it, you can still use AJAX. It's fine to
load blog posts via AJAX, and in that case you aren't reloading the
boilerplate of the page. I am saying to return rendered html in the response
rather than returning json and rendering it into html in the browser. The
specific data returned can be the same either way; the overhead of html over
json for a set of blog posts should be tiny, and depending on how much markup
the posts themselves contain, the html version may well be smaller.
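As a hypothetical sketch of what's being proposed: the endpoint returns a
server-rendered html fragment and the client simply inserts it. The `/posts`
endpoint and the `#posts` element are made-up names.

```javascript
// Load more posts over AJAX, but receive rendered html rather than
// json. The server did the templating; the client just inserts it.
async function loadMorePosts(page) {
  const resp = await fetch(`/posts?page=${page}`);
  if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
  const html = await resp.text(); // already-rendered markup, not json
  document.querySelector('#posts').insertAdjacentHTML('beforeend', html);
}
```

The page boilerplate is only downloaded once, yet no templating code or
rendering logic has to run in the browser.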

------
Wintamute
> The most obvious issue with this architecture is service startup-time. Since
> data is no longer embedded into HTML pages that are served at once, our
> static front-ends must make API calls and retrieve data before the page can
> be allowed to render.

Universal JS apps can help here. Render the JS app page server side using a
thin Node server, hydrate the fully formed DOM on the browser and run it
client side from there. Truly, the best of both worlds.
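A dependency-free toy sketch of the pattern (real apps would use React or
similar; every name below is invented for illustration):

```javascript
// Shared render function: the server and the client both use it.
function renderApp(state) {
  return '<ul>' + state.posts.map(p => `<li>${p}</li>`).join('') + '</ul>';
}

// Thin Node server side: send fully formed markup plus the state
// itself, so the client can take over without refetching anything.
function renderPage(state) {
  return `<!doctype html>
<div id="app">${renderApp(state)}</div>
<script>window.__STATE__ = ${JSON.stringify(state)}</script>`;
}

// In the browser, the DOM arrives already populated; re-running
// renderApp(window.__STATE__) yields identical markup, so the app can
// "hydrate" (attach behavior) with no blank first paint and no
// initial API round trip.
```

The key point is that one render function serves both sides, so the
server-rendered page and the client-rendered app can never drift apart.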

~~~
eloff
What idiot downvoted this? This is _the_ solution for companies using single
page sites. You render the page server side using some headless browser and
serve that on the first load (or always for search engine spiders, or people
with JavaScript disabled.) That way you don't suffer the latency penalty on
the first load and subsequent loads work as usual with a single page app. I'm
pretty sure I read that twitter does this, but it might be some other big
player.

~~~
sanderjd
It is definitely _the_ solution, but I've yet to find _the_ setup that makes
it cheap and easy to do. Links welcome!

~~~
firasd
React will render HTML on the server side using the same code and data
structures you use for dynamic components on the client side.
[https://facebook.github.io/react/](https://facebook.github.io/react/)

~~~
GordyMD
To demonstrate this point for those not familiar. Here is an example of using
React + Redux on the server to render with an initial state.

[https://github.com/GordyD/3ree/blob/master/server/universalA...](https://github.com/GordyD/3ree/blob/master/server/universalApp.js)

------
jamiesonbecker
This isn't the future. _This is the present._

We built Userify[1] (ssh key management for EC2 & elsewhere) entirely using
this model (REST API) because it gives us a clean way to inspect incoming data
and respond with pure JSON.

Since it's just JSON over HTTPS, clients are easily implementable in any
language (e.g., Javascript and HTML5 for the web app, with beta client SDKs
currently in Python and JS and more on the way), or even just a curl shell
script. (The agent[2] is just a single Python script with no dependencies.)

And, now, with great tools like Phonegap/Adobe Build/etc, you can convert your
responsive mobile web app into an Android or iPhone app in seconds
(performance is no longer an issue, either).

Best of all, you maintain a clean separation of concerns and isolation between
your API resources and the front end.

Compared to the server-driven ways from the last century, this is so much
better; we can provide a full desktop app experience, make it feel more like a
web page, or anywhere in between.

AJAX changed everything. This is simply better.

1\. [https://userify.com](https://userify.com)

2\. [https://github.com/userify/shim](https://github.com/userify/shim)

------
apexkid
Can't believe such an uninformative post is trending #2 here. It literally
didn't teach me anything new.

------
nivertech
_> API server and a static front end – the future?_

It's not a static front-end but an SPA - Single Page Application. It's part of
the larger Serverless / Backend-less computing trend.

We already have:

\- static websites (e.g. served from Amazon S3 or Google Cloud Storage)

\- static blogs and CMSes

\- frameworks for SPAs (Ember.js, Angular, etc.)

\- Firebase and the likes

\- Amazon Lambda and the likes

And yes - Serverless is the future.

 _> Write the back-end initially in RoR or Flask, move it to Elixir or Scala
when your servers start crashing._

Why not write it in Phoenix/Elixir from the start? I don't think it's any
harder than RoR.

------
franzpeterfolz
Well, I just disabled JavaScript with ScriptSafe recently due to security and
privacy concerns.

And there are many blank pages I get to see because of this kind of
architecture and CDNs. Nowadays even blogs with mostly static content don't
work. That's really annoying.

If you depend on search engines, remember that you get penalized for these
kinds of sites. You need some kind of prerendering to satisfy your
ChiefSearchEngineOptimizer.

How do you support bookmarking in a Single Page Application? I think this is
an essential feature, and it is not trivial to implement correctly.

I tried Angular. It works. It looks nice and smooth, but I don't think it is a
good solution. JS is nice as an enhancement, but I don't like it as a
dependency to use the web.

Edit: Not so long ago we had similar approaches to JS SPAs. They were called
Java Applets and Flash. Those technologies are dead. Everything you do today
with JS was possible 10 years ago with them. The only constant in the web
that's going to stay is HTML.

JS is overused. There are so many things you're better off doing without JS,
like blogs, Hacker News, or simply valuable content. There might be use cases
for SPAs, but a high percentage are better off without this cool and fancy
stuff.

Eliminating server-side rendering for scalability reasons as a first step is
often followed by implementing prerendered content in script tags or via React
on Node. The only thing that gets scaled is complexity.

~~~
harel
If you disable JavaScript, I'm afraid you're not the target audience of 99.99%
of the commercial/content internet. The web is no longer static, and websites
are now essentially applications. This is a generalisation, but it mostly
holds true I think.

As for bookmarking: any dynamic single-page content site, and many
applications, will support linking directly to a particular piece of content
or section of the app via various techniques.

JavaScript is not a nice enhancement. It IS the web.

~~~
acdha
It's fine to think of JavaScript as something that you depend on for major
features but I still think you have to follow progressive enhancement. A
small, but potentially growing, number of users have JavaScript disabled
completely but a much larger number of people effectively have it disabled for
a non-trivial period of time while things download or even, due to errors,
until they reload the page.

If you don't degrade well under those circumstances you're at a competitive
disadvantage to sites which do something like quickly return a core HTML page
which can display immediately while the full app loads.

------
jrochkind1
The solution to the one problem he mentions is straightforward, isn't it?
Embed the data needed for the initial page load in the initial HTML delivered
by the server, perhaps as JSON in a <script> tag.
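A minimal sketch of that idea, including the client-side read; the
"initial-data" id and the fallback behavior are assumptions for illustration,
not something from the article:

```javascript
// The server-rendered page would contain something like:
//   <script type="application/json" id="initial-data">{"posts":[1,2]}</script>
//
// The client reads the embedded payload instead of making a first AJAX
// round trip, falling back to null ("go call the API") if it's absent.
function getInitialData(doc) {
  const el = doc.getElementById('initial-data');
  return el ? JSON.parse(el.textContent) : null;
}
```

Using `type="application/json"` keeps the browser from executing the payload
as script, so it's just inert data sitting in the page.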

I'm not too familiar with the new front-end frameworks; do any of them support
this seamlessly? If not, I'm surprised, since it seems like a fairly
straightforward thing to architect.

What dismays me more about the rise of the static front-end single-page-app,
is it really means the death of the idea of the architecture of the web. It's
going in the opposite direction from the 'semantic web' dream, as well as
REST.

~~~
teen
I do it this way with angular / express / server-side templating. You render
the title / meta tags server-side for crawlers (fb, twitter, google) and drop
your initial request payloads in a script tag. Everything else is client-side.
It took me a long time to set this up and get it right, though. However, it's
fast as hell and really quick to develop on.

~~~
mst
Working on the assumption that the crawlers will index the JSON in the script
tag?

~~~
jrochkind1
Crawlers indexing is a different problem, and not the one the OP was talking
about -- the OP's problem is specifically speed of first page display, and
even more specifically speed of first page display due to having to make AJAX
requests for data to display the first page.

But crawlers and "single-page" Javascript apps are another potential problem,
sure, although not the one the OP discussed.

But wanting actual HTML in the original non-JS delivery for crawlers is I
guess why people have followed the 'universal JS' approach instead of what
we're talking about here.

Other comments in this post suggest that Google, at least, does fine with
Javascript-generated pages these days. I have no idea myself.

~~~
mst
I was replying to a comment that specifically mentioned crawlers:
[https://news.ycombinator.com/item?id=10325666](https://news.ycombinator.com/item?id=10325666)

------
bhsiao
Isn't this more or less what happens in a native app? The only difference is
that on the web each view is identified by a URL. If so, history.pushState
should make this trivial.
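A rough sketch of how that could look; `showView` is a stand-in for whatever
function renders a view, and the paths are hypothetical:

```javascript
// Give each client-side "view" its own URL so back/forward navigation
// and bookmarking keep working in a single page app.
function navigate(path) {
  history.pushState({ path }, '', path); // update the address bar
  showView(path);                        // render without a full reload
}

// Back/forward buttons fire "popstate"; re-render the stored view.
function onPopState(e) {
  showView(e.state ? e.state.path : location.pathname);
}

// In the browser: window.addEventListener('popstate', onPopState);
```

Because every view has a real URL, a bookmarked or shared link can be routed
to the right view on first load as well.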

------
donatj
There are SO many cases where just delivering a prebuilt page would work so
much better, and yet they don't do it. It's incredibly frustrating. Generate
the HTML _once_ server side and cache the hell out of it. Generating it on
EVERY PAGE LOAD for EVERYONE is cheaper for you but costs a lot more CPU time
for everyone else. It's a waste of energy, it's a waste of battery, and I'd go
as far as to say it's immoral.

~~~
cosmolev
I would generalize to say computers are immoral!

------
sbov
Why do I care to scale my front and back end independently? I'll just toss
another full stack server on the heap of others and be done with it.

~~~
Guzba
Well, that's sort of missing the point. If you have a fully static front end,
you put it in S3 and never worry about it again. Literally ever. Then you have
a json api that both your website and any mobile apps talk to. That makes the
website and the apps the same kind of api consumer, and actually takes the
number of moving pieces down quite a bit.

------
yuvadam
s/future/past/

Nothing about this 'architecture' is new in any way.

------
jacques_chester
Yesterday's post:
[https://news.ycombinator.com/item?id=10319822](https://news.ycombinator.com/item?id=10319822)

I was under the impression that identical URLs were rejected as dupes?

~~~
dang
On HN a post is only a duplicate if the story has had significant attention in
the last year or so.

We do reject identical URLs for a few hours to avoid stampedes of popular
posts.

[https://news.ycombinator.com/newsfaq.html](https://news.ycombinator.com/newsfaq.html)

~~~
jacques_chester
I think I mentally crossed wires with Reddit.

Is it OK to regularly submit twice in two days? I see that pattern with OP.

~~~
dang
I think we invited eatonphil to repost a couple of stories.

Reposts aren't necessarily a problem if the story hasn't had attention yet. On
HN we want to optimize for curiosity, which means getting the best candidate
stories in front of the community. Since /newest by itself does a poor job of
that, we've been experimenting with other approaches.

~~~
jacques_chester
Wouldn't it be easier to give yourself a super-vote of some kind? It'd reduce
the thinly-veiled grumbling from jealous folks like yours truly.

~~~
dang
Devices like that and/or rolling back the time decay on a post only work while
the post is relatively new. Once it's past a few hours old, it looks weird to
have it on the front page with only a few points.

More at
[https://news.ycombinator.com/item?id=10325217](https://news.ycombinator.com/item?id=10325217)
if you're interested. It'd be great to find a better solution and we're open
to ideas.

------
volaski
Thanks for the profound insight, captain obvious.

