Hacker News new | comments | show | ask | jobs | submit login
Show HN: Sapper.js – towards a better web app framework (svelte.technology)
410 points by rich_harris 7 months ago | hide | past | web | favorite | 209 comments



I think this premise is wrong:

> 2. As a corollary, your app's codebase should be universal — write once for server and client

This is a mistake a lot of developers make. The server and client are not the same. As far as rendering HTML goes, the client does a superset of what the server does.

This means either the framework has to be leak-free, meaning you never need raw DOM access at all, OR once you do need lower-level access (for doing animations, for example) you are in trouble. You now have a problem where you have some code that needs to run only in the client. So you have to special-case that code to check whether it's running on the server. And you need a strategy for what to do on the server (do you just omit that part, or do you try to do something more basic, or what?)
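The special-casing described above tends to look something like this minimal sketch; `isServer` and `setupAnimation` are hypothetical names for illustration, not any particular framework's API:

```javascript
// A minimal sketch of server/client special-casing in "universal" code.
// `isServer` is an illustrative helper, not a real framework API.
const isServer = typeof window === 'undefined';

function setupAnimation(node) {
  if (isServer) {
    // No DOM on the server, so a fallback is needed: here, do nothing.
    return null;
  }
  // Client-only branch: raw DOM access is safe here.
  node.classList.add('fade-in');
  return node;
}
```

Every such branch is one more place where the "universal" abstraction leaks.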

In my experience you run into more pain trying to plug the leaks in these "universal" apps than you would if you just had separate server and client templates. Remember that the rule of 3 is the rule of 3, not the rule of 2.

https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...


You really want multiple layers of "server" code, one of which is a tweaked version of what's on the client, and one of which is purely server-side.

For example, you can't trust client-side code for security/authentication purposes. You almost certainly don't want client code to connect directly to your database and run arbitrary queries/look-ups. So some kind of server-side-only wrapper around your DB, at least, is required.

But the reason you'd run "client" code on the server is to have a layer of universal HTML-rendering code. The server needs to do it for fast initial page loads (and SEO), and the client needs to do it for fast, offline page loads.

At the same time, there's a bunch of code that will run on the client side only, e.g. click handlers, animations, etc. There's essentially nothing for that code to do on the server, but I think it's still basically correct to call it "universal" when you mean "client code that can also run on the server" which is what Sapper provides.


This has a name: it's the BFF pattern (backend for frontend). Its main reason for existing is just to do SSR. It might also do some level of service orchestration, where it pulls API responses from multiple microservices and munges them into a single page model to reduce round trips from the client.


In practice, this isn't an issue. Component lifecycle hooks and methods (which is where anything involving raw DOM access or animation takes place) never run on the server — the SSR renderer just generates some HTML for a given initial state. Once you grok that, it's easy. Certainly much easier than maintaining two codebases in parallel!
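A rough sketch of that idea, assuming nothing about Sapper's actual internals: the SSR renderer is just a pure function from initial state to an HTML string, and lifecycle hooks (where DOM access lives) never enter the picture on the server. Names here are illustrative:

```javascript
// Hypothetical sketch: server-side rendering as a pure function from
// initial state to HTML. Component lifecycle hooks simply never run
// in this code path. Names are illustrative, not Sapper's API.
function renderToString(state) {
  return `<h1>${state.title}</h1><p>${state.body}</p>`;
}

const html = renderToString({ title: 'Hello', body: 'Rendered on the server' });
```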


Then you have a server/client difference if the lifecycle hooks change the DOM (like a jQuery plugin or something). So you have to figure out what to do on the server.


The server is only involved when you first hit the page. After that, the client takes care of all the rendering. There is no discrepancy!


There's a discrepancy if the user doesn't see something when the page first renders but then does see it once the JS runs.


That would be poor coding or application design. There’s no reason for that to happen with SSR. At all.

There’s especially no reason for it with modern frameworks like React/Vue/etc - it’s not been a technical problem for years.


With Ember and React at least, it's the same if you use 100% client-rendered elements; the component/template renders to the page, then a lifecycle hook is triggered, and then your custom code/plugin runs.


Honestly these are solved issues. It’s 2018.


The client handles the dynamic aspect of time, whether that's event handlers that may be invoked later or animations which may run at some point and for some length of time. The server only needs to handle its default, but as other commenters mentioned, lifecycle hooks help manage when some logic gets run.

I think it does take some time to think universally, but it's not all confusion and hair pulling.

In fact, it kinda reminds me of writing testable code. You need to separate some concerns, but ultimately, it's a set of conventions that help you get there.

I can tell you, there are plenty of times when even a rule of two isn't worth the maintenance effort that's caused by rendering HTML in two different places.


If it doesn't make sense to render something on the server (a Google Maps embed? YouTube? a calendar datetimepicker?), then wrap it in a simple <no-ssr> tag, which has existed for a long time in all major frameworks supporting SSR. Interesting problem, but on the rare occasions it actually shows up, it has an obvious, practical, and proven solution.
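What a `<no-ssr>`-style wrapper boils down to can be sketched roughly like this; `noSsr` is a made-up name, and real frameworks expose this differently (as a component or directive):

```javascript
// Hedged sketch of a <no-ssr>-style wrapper: emit a placeholder on the
// server, and render the real (DOM-dependent) content only on the
// client. `noSsr` is an illustrative name, not a real framework API.
function noSsr(renderChild) {
  const onClient = typeof window !== 'undefined';
  return onClient ? renderChild() : '<!-- client-only content -->';
}

const out = noSsr(() => '<div id="map">Google Map goes here</div>');
```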


Is it strange that I actually prefer two codebases? I find the hard split (or just client/server architecture) provides clarity, focus, and peace of mind.


In Universal React based apps, some of the lifecycle methods only run on the client side, e.g componentDidMount. So you just perform client-side-only functions there. There's also react-no-ssr - https://www.npmjs.com/package/react-no-ssr.

It is an issue, but one that is really easy to handle.


The same applies to Ember apps as well. Honestly, this is not an issue.


I’m not sure I’d share a lot of code between server and client even if I could, but sharing the odd domain type, some utilities etc might be handy.

The universalness I want is that I want one language, package manager, toolchain, IDE etc for all the code I’m working on. Worst case would be to have the two platforms being slightly different. Say two IDE’s with slightly different shortcuts, two curly brace languages with slightly different syntax, two different build systems you need to integrate in the CI etc.


Why's that? Why would you want your server response and your client-side app to be different?


Why is what? Why I would not share code?

Server response, do you mean in the context of a server that does server side rendering? I was thinking in the context of a normal app: api backend and (possibly multiple) front end rendering apps


Ah, apologies. I meant server response as in the server-rendered version of your client-side app, not your backend server.


Couldn't agree more. It's El Dorado for web developers.

You might be able to write all your app code in Javascript, but the inputs and outputs are so different that you never really avoid "if(isServer()) {...} else {...}". You can't contain that in a single file, either. It surfaces in the most unexpected places and gradually erodes the illusion of "universal," until you're never sure in which order or what environment your code is executed. You'll have pieces of code written and rewritten again and again and executed several times just to make sure it gets called in the exact moment, with the exact arguments you need, at least once.

I spent many years in search of "universal" JS, and worked with many people who were convinced they had found El Dorado, yet always ended up in situations like I described, so maybe I'm jaded.


This is not the reality though... with React and Ember (presumably also Vue) there is one place where client-side only stuff happens, and it's in component lifecycle hooks. It's perfectly clean and makes sense.


It was and is a reality for many people and was my very frustrating reality for quite a while. The concept of "client-side only" code that does anything more significant than animations (such as, say, modals) breaks down if you want deep linking, and deep linking requires full data reloading on every action, because how can you be sure the data is all there if you don't load it, and once you're reloading your entire state tree on every action you're already slower (not just in data transfer but in code execution) than the "traditional" client-server pattern, and if you have a UX team and any timeline pressure at all, you've already written yourself into several corners for UI bells and whistles, which unfortunately affect the server side as well now, not to mention being at least a major version out of date on several NPM libraries, and just another rewrite away from clean architecture, and so on...


So you have 2 versions of the code with a shared contract between the two. Isn't that exactly what the original criticism was, other than in this world your server and client both have packaged code that is likely redundant or at least not a clean abstraction?


We have one version of the client-side code and one version of the backend API code. The server-side code mounts the client-side code like a pure function for rendering on a user's first arrival. You can technically separate the SSR from the API if you want (might even be default in some places). "You have 2 versions of the same code" is a simple misconception from people who still think in terms of jQuery at best, at worst it's intentionally ignorant and misleading.


"2 versions of the same code" is not a very precise description. Either it's one chunk of code or two chunks, or one chunk that extends the other chunk in some way.

What we ended up in my case was one chunk of code (server) that extends a shared chunk (server/client), with plenty of server vs. client branching inside the shared chunk, either on a boolean flag or the presence of some data structure that only exists in one environment.

I have never seen the use case that many people flippantly claim is so easy, which is the "one chunk" concept, nor have I seen a chunk of server- or client-only code extend shared server-client code in a way I would describe as clean. Stuff tends to get messy when you have to deal with the space between the server and the client, like cookies, asset loading, or third-party services, or when you have to do fancy UX tricks. Then users who are used to "traditional" websites do things like refresh the page at weird times, click buttons more than once, load the app from the wrong URL, use the wrong browser, click things too fast, leave a tab open too long, and so on. And then, as others have mentioned, you have sensitive logic that needs to live only on the server for business reasons. Then the framework you're using moves too quickly or not quickly enough to support a browser feature you didn't know you needed until right now. All this stuff only comes in later, once the app meets the ground, and is rarely discussed in initial architecture meetings. The only way I could possibly see these solutions as cleaner than the old "two-chunk" way are if I were to automatically dismiss all those pragmatic concerns or consider them to be cleaner simply by virtue of the fact that they're different.

It's frustrating to me because I see so much optimism on online forums about "Universal" JS as this earth-shaking sea change on the verge of happening, even though honestly it's been 10 years since NodeJS came out and first made Universal possible. It makes me feel a bit like an old fogey for throwing up my hands and saying, "Let's just let it go and stick to the original design of the web," meaning, two chunks. But that's the only way possible for me to deliver the real requirements I have on projects without driving myself crazy.


Apparently I’m working in El Dorado then...


> This is a mistake a lot of developers make. The server and client are not the same.

This is the basic value premise of frameworks. Write less code with uniform conventions and make it easy by hiding the complexity. Unfortunately, the premise is faulty.

You can only lie to yourself for so long that your ignorance of how things really work is fine because of (name your favorite abstraction). This is a very direct example of tech debt. By ignoring how things really work the application will gradually get bigger and slower until it is an incompetent mess like paying off credit cards with other credit cards.

The reason for this snowball effect is largely due to a lack of confidence that feeds upon convenience as opposed to examination of why technical problems exist in the first place.


> The reason for this snowball effect is largely due to a lack of confidence that feeds upon convenience as opposed to examination of why technical problems exist in the first place.

I find this a great answer.

Much time is spent these days on taping over problems at the high level on the top of broken stacks, instead of fixing the underlying issues.

Building on broken foundations will only get you so far and yield diminishing returns, even if it initially looks easy and convenient to just build on top.


> the rule of 3 is the rule of 3, not the rule of 2.

But, to quote from your link:

> and may or may not if there are two copies.


Sorry, I couldn't resist. Second and third paragraphs, with some minor editing:

HTML is close to this ideal. If you haven't encountered it yet, I strongly recommend going through the tutorials at https://www.w3schools.com/html/. HTML introduced a brilliant idea: all the pages of your app are files in a your-project/pages directory, and each of those files is just an HTML page.

Everything else flows from that breakthrough design decision. Finding the code responsible for a given page is easy, because you can just look at the filesystem rather than playing 'guess the component name'. Project structure bikeshedding is a thing of the past. And the combination of SSR (server-side rendering) and code-splitting — something the React Router team gave up on, declaring 'Godspeed those who attempt the server-rendered, code-split apps' — is trivial.


If you don't mind reloading the entire page on every single navigation, and don't have any dynamic data or interactivity, this is indeed a very solid approach.


But this is where the entire argument of "do first page load server side because it's faster" doesn't make any sense.

Because then aren't you suggesting that after that, all the other pages can be slow?

The worst sites I visit now are React.js-style sites; instead of the page load being slow, _everything_ is slow. Ugh. I can't wait until web 3.0, when this silly "make a browser with React and run it inside a browser" baloney fades out...

(note, Netflix _removed_ their js rendering from their home page cause it was slow)


Server-side rendering means you get a quick first load. Client-side rendering means subsequent navigations are quick, because there's less data to transfer (maybe a little bit of JSON, maybe nothing at all). Going back to the server for 100kb of HTML and reloading the entire page, as opposed to fetching 10kb of JSON and instantly updating it in place, is a very 2002 way of doing things!
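The "fetch a little JSON and update it in place" flow described above can be sketched as a pure state merge; the state and patch shapes here are made up for illustration:

```javascript
// Sketch: instead of reloading a full HTML document, client-side
// navigation fetches a small JSON payload and merges it into the
// current page state; the framework then re-renders only what changed.
// The state and patch shapes are illustrative.
function applyUpdate(state, patch) {
  return { ...state, ...patch };
}

const before = { route: '/item/1', title: 'Old story', comments: [] };
const after = applyUpdate(before, { route: '/item/2', title: 'New story' });
```

In a browser, the patch would come from something like `fetch('/api/item/2.json')` (a hypothetical endpoint) rather than a literal.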


>...because there's less data to transfer...

The problem is that that's just the theory; reality is often different.

>Going back to the server for 100kb of HTML and reloading the entire page...

This implies that 1) you don't go "back to the server" to load your bit of JSON. and 2) the JSON loaded doesn't require any extra templates, images or more JSON to be loaded. See the problem? It's all assumptions.

Another thing, what about browser caching? I can see how the next page loads _only_ the html, as all the js, css and images were loaded already. I think you are falling for the exaggerated speed benefits from making a website with a JS framework.

To be clear, I am talking about a web page, an HTML document, static content. Maybe that's where the confusion happens in discussions like this. (I said "sites", not "webapps" in my original comment)

A history lesson, we did this with Flash years ago (no more page refreshes!), and everyone ultimately learned to hate it. Just because the flashy crap is spread over the entire "webapp/site" to every button, mouseover and click doesn't mean it's useful or better. (I think many devs think they need to build basic site features with big frameworks just to "stay modern".) Look at Youtube, it's slower now that they implemented their "webapp" experience, but the page doesn't "refresh", so that is somehow better... (is youtube really a webapp? Or a bunch of pages with videos on them?)


Ultimately, the proof is in the pudding. Which is faster, when you start navigating around?

* https://news.ycombinator.com/item?id=16052558

* https://hn.svelte.technology/item/16052558

Bear in mind that HN itself has a huge advantage over HN clones, yet — for me at least, sitting here — the Sapper version is significantly snappier.


I tried this on my phone (Android/Firefox). I found that it's about .5 seconds to navigate between pages on hn.svelte.technology, and about 1.5-2 seconds with news.ycombinator.com, on average. Sometimes the straight page load was just as fast (.5 seconds) and sometimes it sat there for 3-5 seconds.

Yes, the Svelte version is generally faster on my phone as well. (I am not sure I see the need for a framework for this specific feature, though; it's just a JS tabs sort of interface. It doesn't seem like a good argument piece for or against this framework.)

I did get a possible error: the pages didn't match up after a while. Is the Svelte data being cached in a way that can't refresh properly? I can't debug this as a normal user; does hitting refresh do anything to reload the site content on the Svelte site?

Try this: open both sites in different tabs and navigate around on them a few times. After a while, news.ycombinator.com will show new content, and the other one will be different. (I am guessing older content, server cache issue? I didn't compare carefully; maybe they started off different and I didn't notice.)


Possibly a misbehaving service worker serving stale content! I can never configure those right the first time.


Hi Rich. On my iPhone, I get a 3-second delay between when I “swipe to go back” and when the page is usable/scrollable on the Sapper HN. When I first went to the site, via the link above, the top nav links did not seem to work reliably, and it may have been the same 3-second delay, but I’m only getting now when swiping to go back — maybe because the page is being reloaded from the server? It’s still quite a delay, though.


Interesting. I get behavior like that with twitter.com on the iPad. It hangs with a blank white page for a few seconds. I always wondered what kind of client-script shenanigans were going on with that. Using the back button embedded in the web page does not cause the same delay.


I get the same behavior on my phone with their new website too. Service worker infancy bugs?


Hi David! That's odd — it sounds like maybe you're interacting with the nav before the client-side app starts, though it's a little hard to tell exactly. Will try and reproduce it here — thanks


Bear in mind that HN loads content in HTML tables, which are notoriously slow to render. I am not saying it makes a huge difference, but it could have an impact.


Judging based on the 'load' event they are about the same, or the HN version might be a little bit (<100ms) faster. The Sapper version is faster with the service worker.

So basically the added complexity gives you the same load time when done extremely well (as Sapper is). In terms of bang for your buck, the service worker is biggest improvement. Which you can implement in your server-rendered app.


What about when you start clicking the links in the nav, or opening other stories and profile pages? As far as I can see, the Sapper version blows the original out of the water.


The Sapper version is cheating a little: it is preloading the pages on mouseover events on the links.


It's not just cheating a little. Just moving the mouse around the page causes lots of data to be downloaded behind the scenes; you could end up loading every story on HN without ever leaving the Top list. That's not friendly to users' data plans, and it's very far from expected browser behavior.


It's configurable. Add rel=prefetch to an <a> element and you'll get that behaviour; don't, and you won't. Since the links are small, mousing over them tends to suggest you intend to click them, but the exact behaviour is subject to refinement over time (this is pre-1.0 software, adjust your expectations accordingly).

Incidentally, 'cheating' is a bit of a silly word to use for a feature designed to improve UX that people spend a lot of time trying to implement (see e.g. http://instantclick.io/)
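The gist of mouseover prefetching can be sketched like this. The qualifying check is kept pure here; in a browser it would hang off a delegated mouseover listener that calls `fetch(link.href)` to warm the cache. The `rel=prefetch` convention is from the comment above; everything else is illustrative:

```javascript
// Sketch of mouseover prefetching: only prefetch links that opt in via
// rel=prefetch, and only once per URL. In the browser, a delegated
// mouseover listener would call fetch(link.href) when this returns true.
function shouldPrefetch(link, alreadyFetched) {
  return link.rel === 'prefetch' && !alreadyFetched.has(link.href);
}

const seen = new Set();
const first = shouldPrefetch({ rel: 'prefetch', href: '/item/1' }, seen);
seen.add('/item/1');
const second = shouldPrefetch({ rel: 'prefetch', href: '/item/1' }, seen);
```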


Yep, the Svelte version here appears faster. I'd have to inspect the network traffic to see what it's doing with the data, but I can see a definite speed advantage in this circumstance. Too bad it doesn't work everywhere like this.

How fast would it load if every entry had an image? Maybe then it wouldn't seem so different? (edit: this is from a desktop/Firefox)


It's not just the initial page load though (although that's essentially similar speed). Navigate through the site, and you shave off seconds at every next click.


I get what you mean and you are right in a way, but raw performance is not the only thing to consider. Yes, server-rendered sites are usually an order of magnitude simpler to code than these JS-rendered solutions, and are often faster, but where JS rendering shines is interactivity and perceived performance: you always know what is going on (loading, sending a text, etc.), provided the UX is right. Sprinkling some client-side code on top of server-rendered templates is so limiting for everything but the simplest websites!

It's 80% more effort for the 20% last performance/richness points.

It's over; people want/expect these kinds of apps, i.e. apps that feel as good as native apps but run on the web.

But you're also right that some people misuse that kind of technology to a ridiculous extent. Downloading a 3MB Angular app just to display a crappy site without any rich interactions is really infuriating, but that's just bad engineering and people who make choices based on what's popular, disregarding context.


> Going back to the server for 100kb of HTML and reloading the entire page, as opposed to fetching 10kb of JSON and instantly updating it in place, is a very 2002 way of doing things!

The difference between streaming 100kb and 10kb is small when compared to the connection latency itself. Even on a low-end LTE connection (5mbps), 100kb is 20ms, whereas the latency of a cross-country (US) transit is realistically ~50-100ms. The difference decreases further when you enable server-side compression and start doing fragment rendering, which tends to remove the penalty for HTML payloads.

The argument for react-like technologies is not connection latency, but rather, the hope that you can avoid synchronous connections in the first place by moving your view logic client-side. And truthfully, few client-side apps are written in a way that would minimize payloads anyway. Most people are doing it because they hate working with the DOM and AJAX and server-side frameworks in a mishmash of web technologies, and perceive these client-side frameworks as easier.


There is also browser caching. So subsequent page loads may not require any new scripts or stylesheets to be loaded, for example, so you're only loading some GZIPped HTML.


You couldn't have said it any better.


If we replaced JSON with something less "chatty", would that propel us to 2022?


    is a very 2002 way of doing things
Despite sharing a common three-letter prefix, "fast" and "fashionable" are frequently uncorrelated qualities.


You know, we may be too specialized to see that normal users, who are the majority, do not care. Of course you may argue that page load and responsiveness are important with regard to conversion metrics; be that as it may, imagine some average non-IT person. Your mother in her seventies. A high school friend. Your auntie. Do you think they care what technology is used? Certainly not. I've seen sites that you would label ugly and from the 2000s, but they just work; users can use them easily. And I have seen a rewrite of such an old site to a modern JS framework, and it was a disaster: the owner was sued by users of the site, as they were missing profits because the site was slow and cumbersome.

I think the question is, who do you write the site for, for others or to satisfy your ego and show off?


How long does it take going back to the server and loading 100kb of HTML these days?


I feel like a lot of HN is stuck in 2010 and dislikes all of this new web tech because they don’t understand it.

This comment is a prime example.


Isn't the opposite true? A lot of "younger" developers are stuck disliking anything older than 2010 because it's not "new". Not because it's slower, or creates a bad experience for the customer, but because it's not new. If it's not faster, and doesn't provide a competitive advantage (I'm talking about real competition, not the pseudo-competitive market of "hire all the developers by dangling shiny new tech in front of them"), then what's the point? (There are certainly use cases where new approaches make sense, but that doesn't mean they become the new defaults)


I've been doing webdev since 1999. I've seen all manner of stupid hacks, dumb "best practices" that have come and gone, and ridiculous frameworks that became _useless_ because browsers just integrated the features. Flash home screens, table layouts, IE-only coding up the wazoo.

So, when someone says they have a JS framework that replaces the core of the browser's display, the DOM, my response to that is sigh, this again.

Shadow DOM is around the corner, what then?

The reason this stuff gets popular is not because it's a good idea; it's because big companies have tons of cash and manpower and can't wait for browsers to get updated. Well, a lot of us code-monkey vets are done chasing "shiny crapola", and we can wait a bit until the dust settles.

How many JS frameworks are there now? I saw all of them come; where are Ember, Knockout, etc. now? Vue just popped up, and I can't even recall the other 5 I tried out at the same time. Which will succeed? When you have to put real money into a project, and the framework matters to your client's budget and your profitability, you can't make a risky bet and expect your client to pay for it later if you are wrong.

I am all for the long term now. Show me how a 1-year-old framework is a good investment; I am willing to learn something. (Keep in mind React is only about 4.5 years old, and Angular is only 15 months old. AngularJS is just 7 years old and already being phased out!)


> Shadow DOM is around the corner, what then?

Nothing changes. The DOM remains a slow monstrosity that is impossible to code for. For goodness' sake, reading certain properties in a DOM tree causes a full page reflow [1].

To beat virtual DOM frameworks you have to write extremely specific vanilla JS+DOM code that basically ends up doing the same thing: re-using DOM nodes, event delegation, etc. See [2].

Shadow DOM was never meant to be an answer to that. Honestly, I have no idea what it was supposed to be an answer to.

[1] https://gist.github.com/paulirish/5d52fb081b3570c81e3a

[2] https://medium.com/@localvoid/how-to-win-in-web-framework-be...
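The reflow point can be illustrated with the classic read/write interleaving mistake. This is a browser-oriented sketch (the element objects can be faked for demonstration); the function names are made up:

```javascript
// Illustration of the forced-reflow problem: interleaving DOM reads and
// writes forces the browser to recompute layout on every iteration,
// while batching all reads before all writes does not.
function resizeSlow(els) {
  for (const el of els) {
    // Reading offsetWidth after the previous write forces a reflow.
    el.style.width = (el.offsetWidth + 10) + 'px';
  }
}

function resizeFast(els) {
  const widths = els.map(el => el.offsetWidth); // batch all reads first
  els.forEach((el, i) => {
    el.style.width = (widths[i] + 10) + 'px';   // then all writes
  });
}
```

Virtual DOM diffing effectively automates the `resizeFast` shape: all computation happens off-DOM, then the minimal set of writes is flushed at once.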


You should learn React and build a simple project with it to understand its real value, which is component-based architecture.

React will likely be around longer than/as long as any framework, but whatever replaces it will use a lot of the same concepts.


How does that help 1000 visitors to a blog read the blog more easily? Not every site is an app, and for the apps we do build, we have already invested in the VanillaJS framework. Do you think React will outlast the VanillaJS framework? (http://vanilla-js.com/)

(edit: this is partly a joke comment, fyi)


I understand the joke but JS isn't a framework, it's a language.

"No framework" generally means "poorly-written custom framework". Do you have any code samples to share from your vanilla JS apps?


>"No framework" generally means "poorly-written custom framework"

This thinking is part of the problem: the idea that not-using a framework almost necessarily means you're writing garbage.

>Do you have any code samples to share from your vanilla JS apps?

I can show you bad code from popular frameworks too--in particular, hacks to get around the "magic" that frameworks offer.

And, invariably, we move on from the framework because now there's "something better". And, we say "that's just progress". No one wants to admit that the last framework was just bad.

It's a trend that's been going on for decades in programming, and not just with JS frameworks. Stick around long enough and you'll see exactly how predictable it is.


"No framework" generally means "poorly-written custom framework".

That is not generally true at all. It might be true if you've got a very inexperienced and unskilled team of developers who don't know how to build a non-trivial application without the structure that a framework brings. However, there must be millions of developers in the world who do have sufficient skill and experience to do that, and these days plenty of those people work in web development. I imagine they can still remember how to do things like basic software design and using libraries, which have been working since long before any of these modern frameworks existed.


I'm afraid you're being terribly optimistic. I'm not totally sure how many millions of people, in absolute numbers, are even competent developers. Out of those, some are web developers, and some subset of those are doing things without frameworks. That leaves a lot of yahoos.


Well, yes, there are plenty of incompetent developers too, as there always have been since development broke out of its earliest new-and-specialised-job days.

Even so, the idea that not using a framework automatically means you'll produce some poor imitation instead is laughable. The history of software development is replete with examples of people succeeding without those frameworks. We've built countless UIs that way that were much bigger and more complicated than anyone has ever built with Angular or whatever this week's flavour of JS framework is.

Maybe there are only thousands of developers working in the web industry who could do it rather than millions. I'm not sure that undermines my real point, since you'd only need a handful of decent devs to make and maintain almost any web front-end I've ever seen.

I suspect the true problem is that for various reasons we lose a lot of that experience and skill from the industry far earlier than we should, and then yet another young generation of developers who haven't studied history are doomed to repeat it, badly.


I agree - I have yet to see the benefit of using a framework that isn't at least 2 years old. Never do client work using newly released frameworks (unless that is specified as a must by the client). The result will come back to bite you sooner or later.

I like the idea of universal code; maybe Sapper will be the solution that comes out winning. I'll let the community be the judge and hop on the train a year later if that's the case.


Or it may well be that this "all new web tech" was created by people who understand neither the web nor HTML. Suddenly everything needs to be an SPA, every site needs to load hundreds of KBs of scripts just to "be current". What's the value of being modern if it makes the user experience objectively worse? Sure, client-side rendering may prevent you from reloading the entire page. And who cares that this re-render actually takes longer than a reload? Did I really need to waste 200KB of my mobile data loading scripts when all I wanted to see were 5 pages each weighing 5KB? What did I gain from trading a full reload for the a-la-SPA kind of deal?


>Suddenly everything needs to be an SPA, every site needs to load hundreds of KBs of scripts just to "be current".

Not everything has to be an SPA, but React is 30kb gzipped and Preact et al are in the single digits.

>What's the value of being modern if it makes user experience objectively worse?

The point is they don’t inherently make user experience objectively worse.

>Sure client side render may prevent you from reloading the entire page. And who cares that this rerender actually takes longer than reload?

This isn’t how it works. Rerenders can be incredibly fast.

>Did I really need to waste 200KB of my mobile data to load scripts when all I wanted to see were 5 pages each weighing 5KB? What did I gain from trading full reloads for the a-la-SPA kind of deal?

What did you lose? You lost nothing because SSR provided the initial page load as quickly as any other HTML.

It sounds like you’ve generalized the entirety of modern SPA development from your experience of a few terribly developed sites with poor user experience.

You probably only notice the poorly-built SPAs because the UX gets in the way. i.e., confirmation bias.


> The point is they don’t inherently make user experience objectively worse.

Ah, but they do. Browsers are very mature and predictable in their management of html pages and SPA's break that behavior in subtle, uncanny-valley ways.

For instance, click into the Sapper HN page https://hn.svelte.technology/item/16052558, scroll down to a post with a random link and click into it. Click on another page in that link. Now click the back button in your browser to get back to the sapper page. You find that you will be at the top of the page as it re-renders. You lose the place you were scrolled to.

This does not happen with standard cached html pages in the browser.

One of the worst offenders is google's home page. When clicking into the links looking for something, clicking back will take you to top, again and again, losing your place. This is a big step back in usability that people are not even aware is happening. It is a small thing individually, but in aggregate it adds up to a large thing.


> You find that you will be at the top of the page as it re-renders. You lose the place you were scrolled to.

What browser is that happening in? In Chrome it works correctly — Sapper uses `history.scrollRestoration = 'manual'` for this exact reason, but maybe it fails elsewhere? It's a bug, and we should fix it. (This is pre-1.0 software, your forbearance is appreciated.)
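For anyone curious what manual scroll restoration involves, here's a rough sketch of the technique (not Sapper's actual code): the router opts out of the browser's automatic behaviour, stashes the scroll position in history state before navigating, and restores it on popstate. `render` is a stand-in for whatever the app uses to draw a route.

```javascript
// Pure helper: merge the current scroll position into a history state
// object (kept separate from the DOM code so it's easy to test).
function withScroll(state, scrollY) {
  return Object.assign({}, state, { scrollY });
}

if (typeof window !== 'undefined') {
  // Opt out of the browser's automatic scroll restoration
  history.scrollRestoration = 'manual';

  function navigate(href) {
    // Save where the user was on the page they're leaving
    history.replaceState(withScroll(history.state, window.scrollY), '', location.href);
    history.pushState({ scrollY: 0 }, '', href);
    render(href); // app-specific rendering, assumed to exist
  }

  window.addEventListener('popstate', (event) => {
    render(location.pathname);
    // Restore the position saved for this history entry
    window.scrollTo(0, (event.state && event.state.scrollY) || 0);
  });
}
```

The failure mode described above typically happens when the restore runs before the page has re-rendered to its full height, so `scrollTo` clamps to the shorter document.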


Way late on this reply, but it was in Chrome. I found that an immediate back button would often work, but if the page you were navigating to was overly large, or you clicked into another link or two and then went all the way back, it would lose your place.


>You probably only notice the poorly-built SPAs because the UX gets in the way. i.e., confirmation bias.

Maybe you are too young to remember YouTube before it was turned into a webapp. It was _considerably_ faster. Today's version requires all sorts of odd refreshing bits and pieces. The old one just loaded right away and showed the video.

If I were paranoid, I'd think that Google did it on purpose to push YouTube Red ads at us.


Agreed, YouTube as a webapp brings no value whatsoever. Someone at Google wanted to rewrite it with newer tech to pad their CV? :)

PS: It downloads that crappy Polymer bundle that weighs more than 1.5MB... ffs


"Right tool for the job", as the old saying goes.

There are sites that indeed are applications, with lots of updates occurring dynamically on a page: a chat, a dashboard, any type of control interface. They do benefit from the libraries that make highly dynamic things efficient.

There are sites that are best served as static pages. This includes the server component, too, BTW. A static site generator producing plain HTML + CSS is often a good replacement to something like Wordpress.

Some people use latest trending tech for no good reason, just to show off. But the problem here lies not in the tech used.


Agreed, don't make SPAs out of sites that work well as traditional websites. Browsers and the internet are fast at serving up html & css these days. Even PHP rendering is pretty damn fast. And it best fits how browsers work for traditional sites.


Agree with you on this. Many (most?) sites work just fine with the time-tested method of server-side rendering. When taking compression and browser caching into consideration, you are typically not sending much data across the wire after the first page load.


Possibly a lot of HN has been around the block for a couple of decades at this point, and has seen this web dev stuff come full circle several times, and before that, the same ideas in desktop software.

Computer science programs really need to introduce something like a "History of Computing Ideas" course as a mandatory element. Maybe we can get away from rehashing the same cycle of trends every five years and move onto some different wheels.


> Computer science programs really need to introduce something like a "History of Computing Ideas" course as a mandatory element.

Isn't that the whole programme?

In my final year I had a couple of more advanced courses in which we could just about get our heads around results published by the lecturer four years ago when we started the programme. Everything else is just working up to that, from Boole's ideas; through Turing's and von Neumann's.


You're looking at this from a more academic perspective than my comment implied. Really, it's the whole clusterf&*^ of software engineering being conflated with computer science, when they are really not the same thing in any way.

There is a lot of practical implementation history that people are often woefully ignorant of, and it leads them to tread the same paths over and over and over.

    - Here are the core GNU command line utilities; make, sed, grep, etc.  They are battle-tested, use them, don't rewrite them a dozen times in JavaScript
    - Here is the progression of version control tools that people have used over time, and some reasons why we've moved on from SourceSafe and rcs to Git.
    - Here's the history of how people have done databases, flat-file, vs sql vs nosql
    - etc, etc, etc.


Ah, I see what you mean. Apologies, I didn't mean to digress from the spirit of your comment, I suppose it just varies with experience: of the things mentioned, including web frameworks, I _did_ hear of the history of how people have done databases, but other than that GNU tools are just things I used to get the work done, and I learned git through open source. (Though I think it would make a great 'stuff you've learned put into practice' talk: diff algorithms, data structures, etc.)


Yes, though I think we need to diffuse the sentiment via organic cultural osmosis rather than relying on ivory towers to indoctrinate for us (not effective).

Alan Kay frequently laments the direction of the computer industry, calling computing "not-quite-a-field" and "pop culture". From an interview in 2012:

Quote:

----

> Kay: [...] [I] happen to believe in history. The lack of interest, the disdain for history is what makes computing not-quite-a-field.

> Binstock [interviewer]: You once referred to computing as pop culture.

> Kay: It is. Complete pop culture. I'm not against pop culture. Developed music, for instance, needs a pop culture. [...] The big problem with our culture is that it's being dominated [more by] pop-culture content than it is for high-culture content. I consider jazz to be a developed part of high culture. Anything that's been worked on and developed and you [can] [sic] go to the next couple levels.

[...]

> [Kay continues]: [P]op culture holds a disdain for history. Pop culture is all about identity and feeling like you're participating. It has nothing to do with cooperation, the past or the future — it's living in the present. I think the same is true of most people who write code for money. They have no idea where [their culture came from]. [sic - brackets in original]

---

It's a great summary of the programming fads that have come to characterize tech culture and how disappointing it is for anyone who is trying to navigate this field with some intent and presence of mind higher than fitting in with the crowd or cashing in on a perceived gold rush.

It's also just a great memorialization of the concept of pop culture in general; pop culture is about a feeling of belonging and community, and engaging positive biological responses associated with social approval and community. It's something that people do to give themselves that rush of praise, and then retroactively try to justify. After a couple of years, no one can understand what anyone was thinking because they were just rationalizing riding the tide.

And, IMO, it aptly describes not only the React phenomenon, but also Kubernetes (aka "shared hosting"), Docker (aka "static linking"), etc. Also IMO, big dollar players stoke this negative culture because it's profitable for them.

It's not necessarily that these technologies won't leave their mark, it's just that they're not the revelations people pretend they are.

Regardless, suspicion is always warranted when people insist that the contemporary is novel, in computing or practically anything else.

[0] http://www.drdobbs.com/architecture-and-design/interview-wit... \ archive: https://archive.fo/DkQOc


Are they going to also teach that at bootcamps?


I'm still waiting for my web framework in pure ASM idea to manifest.


I am willing to concede your pedigree as a developer if necessary, but I've been writing in all manner of languages and environments since '99, and I'm sorry to say there's no revolution here, no battlefield to die on. It's a poor model that has been tried and abandoned any number of times in my career. The ubiquity of platforms that suit these ideas, and of developers who think web development began with Node and SPAs, isn't confirmation that every idea is the future.

Moreover, the notion that because FB or whoever went "hmm nothing solves our domain problem, we have the resources to solve it ourselves" that suddenly they solved every domain problem is equally naive. It's likely that people quoting react stacks are solving issues that never existed for them with libraries they don't need.


This is an incredibly common stance, and it's also very obviously problematic. No one is saying that your experience in software is invalid or that you're wrong about "these modern frameworks" having been wrong in their time. However, if you aren't yourself working as a frontend developer in a modern startup-ey development environment, you aren't in a position to make so many assumptions about the problems such developers do or do not have. Surely, as professionals and adults, we can all appreciate that we are each dealing with problems in the world for which people believe we deserve to be paid. We all take what we do seriously, and are trying to build the best tools we can to do the best work possible. Even if something comes in as a fad, what good is it to balk at a perfectly valuable learning experience?


My problem is that every JS-framework story on HN is full of comments either a) showing a lack of understanding of the tech like the parent, or b) bringing up "fundamental issues" with SPAs which are either nonexistent or long-solved.

That isn't to say some concerns might not have merit, but SEO/SSR/shared routing/state/templates are problems with production-ready OOTB solutions.

If someone isn't familiar with the basic SoTA then they should refrain from commenting so harshly on it.


Your comment expresses a really common sentiment, but it is also vague and offers no supporting evidence.

When was a model like React + Redux (or Elm) used previously, and why did it fail? The combination of factors (browser advances and iteration on existing models) is unlike anything before. This tech stack did not exist in 1999, it did not exist in 2005, and it did not exist in 2012. It is driven by real problems.


> When was a model like React + Redux (or Elm) used previously, and why did it fail?

What do you mean "a model like React + Redux"?

Programmers have been separating underlying state, rendering and interactions since at least the earliest days of MVC in the '70s.

We've been diffing model states to identify changes to act on for as long as we've had explicit isolated state management, which again is several decades.

We've been identifying specific parts of a display to rerender fully because doing the whole thing was too expensive since the earliest graphical video games.

We've been using markup languages to describe UI layouts declaratively for a long time too.

Adopting similar techniques in browsers was likely as soon as serious front-end coding to build larger client-side web apps became viable, and they have much the same pros and cons now as they did in other UI development before. React and its surrounding ecosystem may have been the first JS libraries to popularise the ideas for a new generation, and IMHO those early versions of React were well done and useful, but let's not pretend the underlying concepts are radical and new in themselves.


Yes, but that's never been attached to a frontend that reaches billions of people.

But like you said, those are good ideas, they didn't fail.


Why does the number of people using a UI make any difference to whether the UI does its job well or whether the underlying technologies are appropriate, though? Would any of the arguments for different ways of building UIs change materially if Facebook had a million active users instead of a billion?


I'm just saying that even with that model before, we never had tech with the reach or capability as we do now, which makes that interesting.

If it was a good model for building UIs in 1970 and it proved itself then, we should be taking advantage of that on the frontend as well, instead of ignoring it.


In that case, perhaps our views here aren't so different. I'd agree that React was a relatively rare qualitative step forward in the tools available in web development, precisely because it did bring some ideas and programming styles into the front-end JS community for probably the first time. I just disagree with the idea that this was unlike anything that had gone before, because essentially the same fundamental ideas and programming styles have been used elsewhere in the programming community for a very long time.


The real problem React solves is enabling many people to work on parts of the same page without stepping on each other. The declarative nature is brilliant, but if the messy MVC diagram slides in the reveal presentation are real, then I feel confident in labeling React this way.


When the entire page is 50kb of compressed text and some cached images, I don't mind reloading it on every single navigation (at which point dynamically generated HTML is more than adequate for dynamic data). It's only when devs insist on shoveling 5mb of compressed JS libraries into it that it becomes an issue.


If you get <200ms Time To Interactive, working middle mouse click and back buttons, in which scenario wouldn't you want that?


>HTML is close to this ideal

HTML is anything but ideal. In fact, it's the core of the problem where Web applications are concerned. HTML and its kin (particularly CSS) were designed for content-oriented Web pages, not Web applications.

Continuing to devise Web-app frameworks that are predicated upon developers wrangling HTML for controls and CSS for layout will lead to more iterations of frustration; only to perhaps slightly lesser degrees. It doesn't matter if you write JS that renders it or templates. It's the same problem.

What's needed is a component-framework with proper tooling (including IDEs) for layout and design. We've solved this with Swing and even Visual Basic. So, for the life of me, I still can't figure out why we haven't moved in this direction full-on for Web frameworks. If the issue is that HTML and CSS are standards with broad support, then I concur: but compose the app via a proper GUI and let the tooling generate the HTML/CSS.

It's odd: we use Webpack, Browserify, etc. for full on Web-app builds, along with various cross-compilers for JS. But, at the end of the day, we're still writing HTML-based templates (or JS) and CSS.

This is generally an unpopular view on HN. But, I can nearly guarantee that the next truly meaningful advancement in frameworks will be via a component-oriented approach that abandons direct manipulation of Web standards like HTML/CSS altogether.


I agree with your view, but I suspect it hasn’t happened yet for a couple of reasons:

- every time it was attempted, the generated html/css was terrible (think old ASP Net for example), and maintenance hard. Abstractions only work when they are not leaky

- designers, devs and users in general expect to have custom design for everything, rather than standard controls. Something to do with "having a unique brand and not looking like everyone else's app".


>the generated html/css was terrible (think old ASP Net for example)

We've come a long way in tooling, libraries, and in the underlying standards. Technologies like ASP also pre-dated SPAs and were targeted for server-side development; hence make for a poor analogy. Swing would be a more apt comparison, and even Visual Basic.

In general, poor execution in the past is no reason to drop the objective. We've tolerated plenty of bad HTML-based frameworks and years of CSS-quirkiness, yet we keep plugging away at it.

>and maintenance hard

But the actual objective is that you wouldn't hand-maintain the generated code. We're currently so bound to HTML-thinking that it's hard to embrace that notion.

>designers, devs and users in general expect to have custom design for everything

I've heard this. Interestingly, though, we look to tools like Bootstrap and design-motifs like Material, etc., which emphasize uniformity. Also, there's no reason branding can't be applied without directly manipulated CSS. A good component-based library would allow for that and would also allow for the construction of extensible custom components.


How many of you actually built a React/Redux stack SPA that actually came anywhere near what Sapper offers?

The file size problem is underrated. Not everybody walks around with 5G like in South Korea. Not everybody has the latest and greatest Samsung or Apple phone.

It's a big problem when you advertise a PWA as quick and fast but then in tiny letters mention it's using the best case on the best network and best phone. If you design without the worst case in mind, you've discriminated against people based on whether they can afford the best or not.

The vast majority of users don't. It's only recently, now that Android phones have gotten cheap, that they're seeing widespread usage. These people are not going to wait around for the React/Redux stack to load AND still wait on more data coming through their 2G/3G connection. Oh, and there's slow rendering on older Android phones, and now you have a laggy JavaScript app experience while Lighthouse tells you everything is okay.

Let's not be so quick to judge. It's not as if Sapper has deviated from the trend that even React team is acknowledging. This is a fresh approach to a quickly monopolizing space, from an organization that has other agendas in seeing their tech stack flourish.

I'm gonna definitely be watching Sapper.js in 2018, and if they deliver on most of the 11 criteria, I know my customers and their customers will be happy.


You can boil React down to virtually nothing: preact-compat turns it into 4kb. Redux + router don't add that much. The loading impact, if you watch out and code split, isn't the dire problem.

As for a laggy experience, Fiber is the only feasible attempt at the moment to actually tackle it. Microbenchmarks will not help, and neither Svelte nor anything else will make a webapp fast on a slow phone, as baseline diffing has never been the problem in the first place. The problem is and has been the DOM and the single-threaded handling of it.

Fiber will allow apps to schedule. Imagine an app with 100,015 rows that Svelte renders in, say, 500ms; let it be 10ms faster than React 15's 510ms. It would still eat into your frame budget and cause heavy jank, like all web apps do. Fiber just wouldn't render the 100,000 rows that don't fit on your phone's screen, finishing the whole op in under a single ms. It culls and prioritizes, similar to how virtual lists operate in general, but you get that as a first-class primitive, being able to defer any render call of lesser importance without causing visual distortion or code complexity.
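The culling idea is easy to illustrate. A hedged sketch (not Fiber's actual algorithm, just the window computation a virtual list uses), assuming fixed-height rows:

```javascript
// Given the scroll offset and viewport size, compute which rows are
// actually visible -- everything outside this window can be skipped
// or deferred to a lower-priority render pass.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
  const start = Math.floor(scrollTop / rowHeight);
  const end = Math.min(totalRows, Math.ceil((scrollTop + viewportHeight) / rowHeight));
  return { start, end };
}

// With 100,000 rows of 40px in a 600px viewport, only 15 rows need
// rendering at any one time, regardless of total list size.
```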

Not to mention that with React you could actually produce a real native app, while redux would allow you to share most of your code. With something like react-native-web you could even write a universal app being able to render both in the web and natively.


> Imagine an app with 100,015 rows

I can imagine an app like this, and such an app would have been designed with zero thought given to actual usability by humans. The proper solution is to provide sorting, grouping, filtering and pagination. Your UIs should be small enough to remain fast without insane optimization tricks. Mobile and even desktop browsers are plenty slow at rendering/reflowing 100k DOM nodes; the fastest way to render this is simply not to.

It's not as if you suddenly don't have to worry about the shitty UX and performance of your app's architecture "because Fiber".
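The "render less" approach is a few lines of code; pagination in particular is just slicing. A hypothetical helper (not from any framework) to make the point concrete:

```javascript
// Return one page of items plus enough metadata to render controls.
function paginate(items, page, pageSize) {
  const pageCount = Math.max(1, Math.ceil(items.length / pageSize));
  const current = Math.min(Math.max(1, page), pageCount); // clamp to valid range
  const start = (current - 1) * pageSize;
  return {
    page: current,
    pageCount,
    items: items.slice(start, start + pageSize),
  };
}
```

However large the dataset, the DOM only ever sees `pageSize` nodes, so no scheduling trickery is needed.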


The list is an example, the cheapest one to make the point understandable. Let it be 2000, or 100, filtered and pre-processed. It would still take longer than rendering the 15 rows the user actually sees. The point isn't the list, but the primitive.

I don't even know where the scepticism comes from; culling and prioritization are normal on the desktop. You've been offloading UI-related tasks that could interrupt animations or interactions in C# forever. It is a great tool to have, and getting it for the web is a major step. Microbenchmarks that measure baseline deletes/adds/changes will simply not result in apps that feel native. Thinking framework xyz will feel acceptable to mobile users just because Krausest says it takes 1ms less to add 100,000 rows is just silly.


I would think that some combination of sorting, grouping, filtering and pagination would still make for a more user-friendly experience.


I don't think there's anything more user unfriendly and annoying than pagination. Pull/click/scroll to load have been made to help it. But that aside, the list isn't important. It could be dialog items, complex tree-views, plugins, graphs, visuals in general, etc.


Haha yeah. The magic of green threading is so misunderstood. OK, it no longer blocks your UI thread, but it takes AT LEAST as long as before to render (browsers still don't have an API to reliably schedule small chunks of work without also incurring the cost of setTimeout clamping).

So it's not really an improvement for me: your component tree becomes harder to reason about (when does this component render??), you may see progressive rendering where you didn't want it, etc. Many vdom implementations are faster than React; synchronous performance matters a lot too.

Even the core React maintainers acknowledge this is kind of like an experiment.


Some bullet points are missing from the “perfect framework” list:

1. SSR must support streaming

2. SSR must work in a service worker (so in fact you have 3 targets: server, client/hydration, client/service worker)

3. All work that can be offloaded to a WebWorker must be offloaded

4. Navigating to subsequent pages must support prefetching, including dynamic data (Next.js doesn’t even tell you how to do it)

5. Navigating to subsequent pages SHOULD support prerendering (offscreen) where it makes sense, but…

6. MUST support FLIP-style transitions[1].

Which does mean prerendering anyway for about 100ms at least so the layout can stabilize and cached images load into memory.

For long pages you need to split them into chunks/sections and only animate the part that gets shown (usually above fold).

[1] http://www.pocketjavascript.com/blog/2015/11/23/introducing-...
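For readers unfamiliar with FLIP (First, Last, Invert, Play): you measure an element before and after a layout change, apply the inverse transform so it visually appears not to have moved, then animate back to identity. A hedged sketch, where the transform math is the testable part and `el.animate` assumes the Web Animations API:

```javascript
// Invert step: compute the transform that maps the element's new
// position/size (last) back onto its old one (first).
function invert(first, last) {
  return {
    dx: first.left - last.left,
    dy: first.top - last.top,
    sx: first.width / last.width,
    sy: first.height / last.height,
  };
}

// Browser-only: capture First and Last around a layout change, then Play.
function flip(el, mutate) {
  const first = el.getBoundingClientRect(); // First
  mutate();                                 // apply the layout change
  const last = el.getBoundingClientRect();  // Last
  const { dx, dy, sx, sy } = invert(first, last);
  el.animate(
    [
      { transform: `translate(${dx}px, ${dy}px) scale(${sx}, ${sy})`, transformOrigin: 'top left' },
      { transform: 'none', transformOrigin: 'top left' },
    ],
    { duration: 200, easing: 'ease-out' }   // Play
  );
}
```

Because only `transform` animates, the transition stays off the layout path and runs smoothly even on slow devices.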


This is an excellent list, thank you. To answer some of the points:

Sapper supports partial streaming of server-rendered HTML (typically you'll get most of the <head> while it's fetching data for your page, unless there's no dynamic data in which case you get the whole lot instantly).

I'm not convinced there's actually a benefit to 'server' rendering in a service worker as opposed to serving a shell page. We have an issue for it though! (https://github.com/sveltejs/sapper/issues/22)

Sapper supports prefetching (including of dynamic data) of subsequent pages — just add rel=prefetch to <a> elements (we'll add a programmatic way to do this in future as well).

As for the rest, some of it is app-specific, some of it is stuff the framework could help with. We're not at version 1 yet, so bear with us :)


It should be “free” with the right architecture, and it should be faster, that’s why I included it for the “perfect” framework list.

However, it’s only a win on first page load on a subsequent visit. So definitely not a priority for an actual implementation.

BTW one easy way to implement this would be to flip the usual model:

1) Have service worker/server as main renderers

2) On navigation do a fetch to service worker and patch the page

So you’re on /main, do fetch to /page2?from=state and get patching instructions.

If you could give the service worker readonly access to DOM without copying this wouldn’t actually be that stupid.

It might be possible some day:

https://github.com/whatwg/dom/issues/270
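To make the "patching instructions" idea above concrete, a hypothetical sketch (no framework exposes this API today): the service worker holds both routes' state and answers a /page2?from=state fetch with minimal ops for the page to apply.

```javascript
// Hypothetical: compute minimal patch ops between two route states.
// The service worker would respond to /page2?from=state with
// JSON.stringify(diffRoutes(fromState, toState)).
function diffRoutes(current, next) {
  const ops = [];
  if (current.title !== next.title) {
    ops.push({ op: 'setTitle', value: next.title });
  }
  if (current.body !== next.body) {
    ops.push({ op: 'replaceBody', value: next.body });
  }
  return ops;
}
```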


As always, my main concern with this is traction. Will this be well supported, with a good community, 4 years down the line? I know React will, just because of the massive amounts of business depending on it. When building something real, i have to think about this because i need to be able to hire developers and actually develop my product instead of rewriting it.

That aside, the idea of svelte is cool, and i'm happy they're continuing to build on that. Excited to see what comes out of this!


I think the big benefit here (and most of Rich Harris's projects) is less "will this be the next big thing?" as it is pushing the client-side app world forward with new ideas and questioning status quo. Rich balances these "wild ideas" with solving practical problems and I'm always impressed.


As much as I want to like it, I have trouble thinking of templates as something that questions the status quo. The other big thing seems to be loading effort, but React has gotten so small (react + react-dom = 29kb, react + react-dom-lite = 15kb, preact-compat = 4kb) that there's not really an impact any longer. And as for performance, could Svelte even approach something like the Sierpinski demo, something that would allow React to finally bridge the gap between native and web performance?

When I look at Fiber and react-reconciler, it seems to me React is already contemplating the future. The reconciler especially.


You mean this Sierpinski demo? http://svelte-sierpinski.surge.sh/

Ok, I'll level with you: that's not actually doing the same thing as the Fiber version. But it's basically impossible to accidentally slow down your Svelte app in the way the Fiber demo depends on. Fiber doesn't really speed things up so much as it prevents bad code from slowing things down.

The innovation isn't templates (though these aren't your grandad's templates), it's compiling those templates to lean, memory-efficient JS code that doesn't depend on virtual DOM reconciliation or anything like that.
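To illustrate the difference with a simplified, hypothetical example (this is not actual Svelte output): a compiler that knows `count` only ever affects one text node can emit a direct write, with no tree diff at all. The `document` parameter is injected here so the sketch stays testable outside a browser.

```javascript
// Hypothetical compiled counter component: create builds the nodes
// once; update() touches only the single text node that can change.
function createCounter(document, target) {
  const text = document.createTextNode('0');
  target.appendChild(text);
  return {
    update(count) {
      // Direct assignment -- no virtual tree, no reconciliation pass
      text.data = String(count);
    },
  };
}
```

A virtual DOM framework would instead rebuild a tree describing the whole component and diff it against the previous one to discover this same single change.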


The whole point is the artificial slowdown; taking it out makes it meaningless, since it just prints a few blue bubbles. Scheduling is and always will be the biggest bottleneck. It is that kind of innovation that keeps React relevant.

As for a lean DOM representation, I have never seen or heard of memory-related problems with the virtual DOM. And won't bytecode make it leaner in any case? React-compiled is already being tested.


"lean" is not limited to memory, though: walking down the vdom reconciliation tree can incur some pretty large overheads in terms of nested JS calls, just to decide that only a few buttons have to change. PureComponents and things like that are meant to help with this, but it can still take quite a bit of engineering effort to get right.

Mind you, I don't actually know if Svelte avoids this problem, but a statement like "doesn't depend on virtual DOM reconciliation or anything like that" kind of implies that it defaults to less work.


Svelte is not "templates" (i.e. client-side), it compiles to JS modules


Whatever it compiles down to in the end, it's stringly-typed templates with a custom syntax and weird assumptions about code that break everything you know about Javascript: scoping rules, variable declarations etc.


Okay, but JSX is not 'just JavaScript' either - it's a DSL embedded in JS. You could make the same argument that Svelte is 'just HTML' with a script block, style block etc.


Nope. True, JSX is a thin XML-like DSL on top of JavaScript. However, it uses JavaScript everywhere. It never breaks assumptions about scoping, hoisting, where variables come from, etc.

You can skip JSX and write `React.createElement` everywhere, and nothing will change. Your code remains Javascript code. Unlike whatever svelte/sapper is. More details in a different comment: https://news.ycombinator.com/item?id=16053685

Nothing about svelte is "just HTML with styles and scripts". It's a yet another custom weird templating language with its own custom magic binding rules and a Javascript-like scripting language which breaks all assumptions about Javascript (methods/properties hoisting, invalid scoping rules, automatic data/variable injection etc. etc. etc.). And it's also just strings everywhere. And magic strings (as in "add $ to tell svelte the property comes from a store").


Like TypeScript, Flow or even ES-next: an optional DSL that transpiles to pure JavaScript isn't an obstacle to programming. A string template with an arbitrary syntax, on the other hand, is.


svelte templates compile to "pure javascript".


All template engines do. JSX doesn't get compiled but transpiled, which makes all the difference. Therefore it works with the language, uses the latest ES drafts without problems, and is 100% typesafe as it can work with other supersets like TS or Flow; it also doesn't rely on dependency injection and all the other annoyances.

The biggest reason for me to prefer it: JSX is a simple but elegant solution to a decades-old problem. A function signature was all it took to allow it to become cross-platform and independent of the browser. A template engine comes at the expense of simplicity and flexibility. It needs so many abstractions and circumventions to function (injected scope, evals, parsers, foreign syntax...) and for what? It most certainly doesn't make things easier.


Oh yeah I forgot to factor in the irrefutable fact that Rich is a genius and everyone else who tried building a full stack framework before him was an idiot.


Ask not what your framework can do for you, but rather what YOU can do for your framework.


Massive amounts of business were also built on Angular 1.x, but it is no longer supported. To be honest, I am tiring of seeing a new JavaScript framework every day. I am pretty sure I saw a post for something called Stimulus.js in the last hour, and now this.


Sorry, but that isn't correct. Angular 1.x had a release just over a week ago. 1.6.8 was released on December 21, 2017.

https://code.angularjs.org


I think he is implying that since Angular 2 is out, 1 is guaranteed to be gone, and no longer supported in the future.

Who would intentionally use an older version of a framework, where the upgrade path breaks compatibility?

Angular 1.x is in essence, dead. It's just going to take a long time to go away.


As far as I know, even though it was the hype at the time, Angular 1's traction was not quite that of React's. All of the FB family (Messenger, Instagram, FB, etc.), including the apps, are on React. Airbnb is on it (including the apps), Uber's on it, etc. That kind of business gives me peace of mind. Angular 1 wasn't used in large Google apps; as far as I know most of them are still GWT.


Stimulus.js is first and foremost for Rails devs. Not very interesting for the rest of us.


You can use it with Turbolinks without rails. I have seen elixir and laravel devs using Turbolinks, not sure if they will adopt stimulus too but I suppose they could.


Actually the nice thing about Stimulus is that it is deliberate in ignoring the hype cycle. It works with Turbolinks and Rails core to add the small but missing bits of interaction to a traditional SSR framework.


I think I've been using Rich's previous project, Ractive, one of the first virtual DOM implementations, for 4 years now. It has a decent community and good Stack Overflow love.


I've been following Rich's work on Rollup, Svelte and now Sapper with interest and great awe. I truly believe he's onto something.

At the same time, if I were designing something like this, I'd use a tool like Flow or TypeScript in a heartbeat. Doing AOT optimization is so much easier when you have type guarantees, I'm sure that once you go deep enough you get into all kinds of little issues that are only issues because JS is both dynamic and very quirky.

Rich, if you're reading this, did you consider this and if so, why not?


Thank you.

Svelte itself is written in TypeScript so I'm a believer. I'd like to get first-class support for TypeScript in components at some point. Adding .ts support for the non-component parts of a Sapper app should be fairly straightforward (it's just a webpack config after all) — will add that to the TODO list.


> Svelte itself is written in TypeScript

I completely missed this as I looked at the sample project and saw js files - this has me going back and taking a second look, so you might want to mention it.


Curious, but since you're a TypeScript believer, and venturing into "build yet another JS framework", why not make it TypeScript-first or even TypeScript only?

Granted, you'd lose a large number of JS-only potential users, but it seems like it'd be very differentiating (and unique?) to be TS-first, and market/specialize within the large niche of TS developers/teams.

Vs. the usual TS-last/TS-kinda-sorta approach that, again AFAICT, all the other JS frameworks take.


Learning a new framework is intimidating enough without potentially having to learn a new language on top of it. I work in the news business, among people who in many cases learned the bare minimum of JavaScript necessary to visualise data and tell interactive stories, so that's my natural constituency — beginner-to-intermediate/casual programmers, most of whom don't even know what TypeScript is.

There are TS-first frameworks, such as Angular, and they're off-putting to people who just want to build something.


Couldn't agree more on the praise for "Rollup, Svelte and now Sapper". Well done Rich, keep going!!!


> the framework should do automatic code-splitting at the route level, and support dynamic import(...) for more granular manual control

I disagree! Code-splitting is arguably an architectural dead-end for SPAs:

1) It adds a non-trivial layer of complexity to both your application logic and your tooling. SPAs (and the JS ecosystem in general) are already complicated enough without this extra layer of indirection, and that complexity carries a higher cost than most people realize, especially when it's blindly accepted across the industry as best practice.

2) It's based on the conflation of apps with collections of hypertext documents, whereas in reality these are different architectures with different trade-offs. We're under no such delusions for apps in the app store for example, and thus nobody bats an eye if one takes a full minute to download.

I guess this isn't what people want to hear, but the web is more than apps, and we need to stop pushing SPA as the default way to build every new website. Code-splitting is merely the by-product of choosing the wrong architecture. In cases where SPA is the right tool for the job, we need to make users aware that they're downloading an actual app. I.e. a one-time install screen implemented via some kind of service-worker-driven bootstrapper: "Just a moment while we install your app..."


It’s an interesting idea. From the case studies you’ll find on the web, it looks like Progressive Web Apps want to play minimal bootstrap time as a card against native apps, but it’s true that re-use of existing web know-how could be argument enough in their favour, although native app developers are getting almost as easy to find as web developers these days.


Isomorphic frameworks are a bad idea. I'm convinced that any attempt at making a single general-purpose web application framework will fail. I've seen it happen too many times before.

SocketStream, Derby, and even MeteorJS which had massive funding and marketing just couldn't do it. NextJS is just the next generation of fools who didn't research the market... Surely we can at least wait for NextJS to go belly up before launching yet another project that's going to waste several more lifetimes of human effort.

There was nothing wrong with the implementation of the MeteorJS project by the way... The problem was the idea itself. All the marketing and funding it received only served to delay its inevitable demise.


From what I’ve gathered, both Next.js and Sapper.js are less ambitious than Meteor and Derby. They require you to write your own data access code by hand, by making HTTP requests or importing whatever server-side JavaScript libraries you need, while Meteor and Derby tried to provide the same model API on the server and on the client. Maybe the model API (and the fact that you were almost forced into storing everything in MongoDB) was the greatest limitation.


THIS.


If the "compiler as framework" idea sounds interesting to you, and you don't mind a little Scala, this paper might also be of interest to you: https://www.google.co.uk/url?sa=t&rct=j&q=&esrc=s&source=web...


"compiler as framework"

This really does sound like we're slowly reinventing Lisp.


"While it's true that citing 'ecosystem' as the main reason to choose a tool is a sign that you're stuck on a local maximum, apt to be marooned by the rising waters of progress, it's still a major point in favour of incumbents."

Yet the article makes clear that React is working on implementing some similar technologies. Is there anything in svelte / sapper that cannot eventually be implemented in React / next.js too? I'm more than willing to wait a year for it to show up in React rather than live on the bleeding edge...


So far, the React team's work is focused on optimising your app code using Prepack (kind of like Angular's AoT compilation, or similar forms of preparsing that have existed in other frameworks for a few years) — you still need React itself.

It's far from clear that it'll ever be possible to compile a React app to something that doesn't need virtual DOM reconciliation, with all that entails.


Curious: why use Webpack rather than your own and IMHO much superior bundler, Rollup?

I’m excited to try this more but I have mixed feelings about Svelte. The concept is brilliant but I feel like there’s too much API and some weird gotchas. I hope there’ll be an effort to simplify it if possible.


It uses code-splitting, dynamic imports and hot module reloading, which are all well supported in webpack. We definitely plan to support Rollup as well, once it has those features — it should be possible to shrink down JS payloads by a reasonable amount. (Maybe Parcel too, eventually.)

I think the 'too much API' feelings might be more unfamiliarity than anything. The API surface area is much smaller than anything like Angular, Vue etc, and although it's a fundamentally different approach to React I'd argue Svelte has a much shallower learning curve there as well. We're very open to feedback and suggestions though!


Maybe it's just the documentation that is lacking, but when I've played with Svelte, the API and template syntax feels like a hodgepodge of special cases.

For example, what exactly does a tag starting with colon mean? Is it just arbitrary syntax for things that don't fit in? What does a colon in an attribute mean? (i.e., why on:click rather than onclick or onClick or even something simpler like :onClick?) Why is there both {{#if condition}}<p>Content</p>{{/if}} and {{{condition && '<p>Content</p>'}}}? I feel like all these different concepts break the "it's just HTML" promise, when it seems like it could be simplified a lot.

Yes, JSX has similar gotchas, but at least it has references and specifications.

And what's up with the crazy semantics of computed properties?

Sorry if this sounds negative, but these have been genuine problems for me in getting a hang of Svelte.


Colons indicate that something is a directive rather than an attribute — so `on:click` is the `on` directive with the `click` event, and `bind:thing` is the `bind` directive with the `thing` data property. The one place we violate that slightly is with `:foo`, which is shorthand for `foo={{foo}}` (since people dislike the ceremony of passing props down between components, and this makes it easier).

{{#if condition}}...{{/if}} tells Svelte something about the structure of your app. {{{condition && '<p>Content</p>'}}} doesn't, and you can't add interactivity to the <p> if it's just a string. Moreover, if `condition` is `undefined`, then that's the string that will get rendered to the DOM. Generally, {{{triples}}} should be used for blobs of HTML you get from data sources, such as a blog post.

If you can overcome your distaste for the computed property dependency injection, you'll hopefully find it's an extremely easy and compact syntax for declaring arbitrarily complex graphs of properties. The advantage of doing it this way is that Svelte can generate, at compile time, very efficient code for updating computed properties without any wasteful runtime dependency tracking. I realise it's slightly controversial (because it's a Svelte idiom, rather than something in JS itself), but for those who have embraced them, computed properties are one of the best features of Svelte!
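To make the compile-time point above concrete, here's roughly the kind of update code a compiler could emit for the classic hours/minutes/seconds example. This is a hand-written sketch for illustration, not Svelte's actual generated output:

```javascript
// Hypothetical compiled output for computeds derived from `time`.
// Because the dependency graph is known at compile time, only the
// affected computed properties are recalculated; no runtime
// dependency tracking is needed.
function updateComputed(state, changed) {
  if (changed.time) {
    state.hours = state.time.getHours();
    state.minutes = state.time.getMinutes();
    state.seconds = state.time.getSeconds();
  }
}

const state = { time: new Date(2018, 0, 1, 12, 34, 56) };
updateComputed(state, { time: true }); // recomputes hours/minutes/seconds
updateComputed(state, {});             // `time` unchanged: nothing recomputed
```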


The problem with Svelte's computed properties is that it's not just an idiom, but a completely new language that happens to look like JS. It does things JS can't, and breaks very basic things like functional composition.

Why not simply use the "correct" ES6 syntax to do the same thing? It's almost identical:

  computed: {
    hours: ({time}) => time.getHours(),
    minutes: ({time}) => time.getMinutes(),
    seconds: ({time}) => time.getSeconds()
  }
This way, the compiler could simply optimise idiomatic cases where it's easy to see which data is depended on, without breaking the language.


Interesting idea! I've raised an issue, thanks — https://github.com/sveltejs/svelte/issues/1069


At first, this looked like a bad idea but now can see the value but also see the ugly and less-friendly to newcomers angle.

Trouble is, coming from a Vue background (where computed props do not have to rely on a state item), the pre-requisite in Svelte to do so seemed at first a major PITA (then I saw the light/benefits and was actually easily able to re-work the Vue versions to Svelte).

Unless this is the only place Svelte goes slightly 'off-piste' in terms of JS then personally I'd leave as is, otherwise make the change


That syntax looks arbitrary and, frankly, hell to maintain. The cost:benefit ratio of learning all these idioms ("idioms", a huge red flag) doesn't seem efficient at all. I apologize for being blunt, yet I still recognize your work as great for pushing boundaries in web development.

What made you turn away from javascript and into templates?


Simply put, you can do more with templates. It's the Principle of Least Power at work — the same way you can do more with a blob of JSON than a blob of JavaScript, templates allow you to do things that are basically impossible with JSX, such as compiling to a string concat function for server-side rendering that is much, much faster. Ask the teams behind Glimmer, Marko, and other tools, and they'll tell you the exact same thing.
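To illustrate the string-concat point: because a template's structure is statically known, a compiler can emit a server renderer that is nothing but string concatenation. A hand-written sketch of what such output might look like (illustrative only, not actual Svelte output):

```javascript
// Minimal HTML escaping for text interpolations.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

// Hypothetical compiled SSR renderer for: <h1>Hello {{name}}!</h1>
// No virtual DOM, no reconciliation: just build the string.
function render(data) {
  return '<h1>Hello ' + escapeHtml(data.name) + '!</h1>';
}
```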

We've basically covered all those idioms in a few paragraphs. There's very little extra stuff to learn. Now if I may be blunt in return, I was converting the React RealWorld implementation to Svelte for the purposes of this post, and there were moments that I spat out my coffee at how absurd some of it was — twice as much code, with some truly bizarre (but idiomatic-to-React) constructs. It's all a matter of perspective and familiarity!


Hi Rich, I notice you mention Marko. That was my first thought when checking out Svelte today - Marko uses a somewhat similar approach, and thus brings somewhat similar benefits. But Marko has a couple of other advantages - a beautiful "concise" syntax option, and easy debugging at dev time, as the Lasso bundler turns JS modules into equivalent files in the browser, complete with the same line numbering. (It only works with CommonJS modules, but that's what I prefer to use anyway as it allows unit testing in Node without transpilation.) What advantages would you say Svelte has that would make me consider using it over Marko? Thanks.


I'd say it's a matter of personal preference as much as anything else. (Personally I'm not a fan of the compact syntax — I prefer just using HTML and CSS, but it's subjective.)

Performance-wise, you'll get great results with either framework. You mention debugging — Svelte creates useful sourcemaps, and the generated code is very readable anyway. Svelte has a few features you might find interesting (AFAIK Marko doesn't offer these, though I'm not intimately familiar with it):

* it can compile directly to custom elements

* your styles are scoped to the component

* declarative transitions

* built-in global store (think Redux, but zero boilerplate)

* useful element bindings (e.g. for customisable media players https://svelte.technology/repl?example=binding-media-element...)

* computed properties. These are a lifesaver when you're doing a lot of complex reactive stuff

and so on. Also, I don't think Marko has an equivalent of Sapper.

Finally, while Marko is slimmer than the likes of React or Vue, there's still a runtime library you need to include on your pages. A typical Sapper page is about the same size as Marko by itself, before you've added any app code.


Thanks, sounds worth a closer look.


Just like anything, there's a small learning curve on syntax. Once you can get past your grievances (is HTML any less weird?), Svelte is extremely fast to build with.


I'd love an attributes syntax akin to riot.js, less noise and blends into html nicely.


What I don't get in all these frameworks is the desire to invent new and incompatible template languages with increasingly inane syntaxes. Oh. And, of course, with stringly-typed programming and magic binding rules.

Just look at Svelte's templates: https://svelte.technology/guide#template-syntax

Not only it's some custom implementation, it also breaks all assumptions about javascript code

   const counter = new Counter({
      data: {
        count: 99
      }
   });

   ...then {{count}}, or counter.get('count')
Why is it `counter.get('count')`? WTF?

Or just the section on computed properties[1]: oh, it's no longer even Javascript. It's some magic Javascript-like scripting language that gets injected with properties based on what you write.

Much like in Vue data, properties, and methods are magically hoisted up to the object breaking everything you learned about Javascript:

  export default {
    methods: {
      say: function ( message ) {
        alert( message ); // again, please don't do this
      }
    }
  };

  // and then

  import MyComponent from './MyComponent.html';

  var component = new MyComponent({
    target: document.querySelector( 'main' )
  });

  component.say( '' );
Why the hell is `.say` a top-level method now?

And so on and so forth.

[1] https://svelte.technology/guide#computed-properties


I’m not an expert but I can chime in and say that .get and .set are common in Ember data objects as well. No idea if it’s an ES6 thing or an Ember thing though.


Those are Ember-specific, because ES getters and setters weren't widely available at the time of introduction. They just dropped < IE 11 though so they'll start to go away.


I think the major missing principle on this list is "streaming." (This is missing in Next, too.)

The server should send down parts of the page as soon as their data becomes available. Typically, pages have a "header" section that requires no data at all, and the server should render it instantly and stream it to the client. Then there's typically some fast-loading "above-the-fold" data; the server should render and stream that as soon as it's ready. Beyond that, there's other below-the-fold components, which may load/render more slowly. The server should render those as they come in.

Then, the magic technique is to progressively attach event listeners to components as they stream in, so you don't have to wait for the whole page to load to be able to interact with above-the-fold components.
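A minimal sketch of the idea (`streamPage` and its arguments are invented for illustration; this isn't any framework's actual API):

```javascript
// Stream sections of the page as their data resolves, instead of
// waiting for the slowest query before sending the first byte.
async function streamPage(res, sections) {
  // The header needs no data: send it immediately.
  res.write('<head><title>...</title></head>');
  for (const renderSection of sections) {
    // Flush each section as soon as its data is ready.
    res.write(await renderSection());
  }
  res.end();
}
```

In a real app `res` would be the Node response stream, and each flushed section could carry an inline script that attaches its own event listeners, so above-the-fold content becomes interactive before the page finishes loading.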


Sapper supports partial streaming if you have data that needs to be loaded asynchronously (i.e. you'll get most of the <head> immediately, then other stuff later), but it doesn't currently do what you suggest (such as loading a <header> while <main> is pending). That's something that we intend to tackle in future, though hydrating pages with async data dependencies is a hard problem.


How the fuck does something that is in early development advertise itself as military grade?!


Getting new technologies to gain traction is not easy - regardless of their technical merit. I do admire the framework designer's P.T. Barnum-like showmanship. He's got it down to a science.


Now add to this the ability to write native apps on the iPhone and Android and I'm sold.


Question:

> 1. It should do server-side rendering, for fast initial loads and no caveats around SEO

Is server-side rendering still an issue for SEO? Don't major search engines evaluate pages now so that SPAs can be indexed?


I can confirm that Google can crawl SPAs like any other page. But do note that they crawl using Chrome 41 (!), so make sure you have sufficient polyfills in place.

This also applies for AdWords, it drove me mad when Google kept insisting my site was not reachable, even though it was.

Only after installing a client-side exception tracking tool (like Raygun or Errorception) did I find out the page was not rendering on the 'old' browser Google uses internally.


It's not as much of an issue for SEO, depending on what you're using to render.

The bigger problem is Facebook, Twitter, and the like are not going to run JavaScript to read your meta tags. It's not an issue for everyone, but if you're a publisher of articles then you're still going to have to render something from the server. Of course one can come halfway by only rendering the head tag contents and not producing a body, which is where most of the complication comes from.
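The halfway approach could be as simple as a shell renderer that fills in the social meta tags server-side and leaves the body to the client. A sketch (the tag set here is just the Open Graph basics; attribute escaping omitted for brevity):

```javascript
// Server-render only the <head> metadata that crawlers like Facebook
// and Twitter read, and leave the <body> to the client-side app.
function renderShell(article) {
  return [
    '<!DOCTYPE html><html><head>',
    '<meta property="og:title" content="' + article.title + '">',
    '<meta property="og:description" content="' + article.description + '">',
    '</head><body><div id="app"></div>',
    '<script src="/bundle.js"></script></body></html>'
  ].join('');
}
```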


Hey Rich, your hackernews clone doesn't perform all that well globally

https://testmysite.io/5a4c2cdf819876444fa127b5/hn.svelte.tec...

I really like Svelte! But I don't even serve my content from Express middleware anymore due to the above problem.

I just feel like you're underestimating a developer's resourcefulness to make the content that actually is static into a static page, and also overestimating the number of new, traditional client/server web apps (at least by people who are probably in your target audience).

Static performance is hard to beat, especially globally, and there's a clear performance degradation once you get away from the epicenter. Believe it or not, Svelte.technology's landing page actually performs noticeably worse coast to coast in the US (>2x worse to load in California than NY: 150ms vs 500ms).

I'm not saying give up or anything, just saying it might be a good idea to talk to more JAMStack people :)


This is more of an ops thing than a framework thing. If you're making a purely static app, JAMStack is great. Sapper is solving a different problem.

Having said that, we will eventually add the ability to 'export' a purely static site that can be deployed to services like Netlify. Next.js has this, but they didn't launch with it either. Bear with us!


If you can "customise every aspect of the system" why does he spend a whole paragraph complaining about people who like JSX? Shouldn't the answer be, "and you can use JSX if you want"?


"This framework is 95% ideal, let's build another framework for my personal 5%."

This attitude is why we have a hundred different relevant frameworks right now. How about, instead of building a new framework, you contribute to an existing one? You even say that Next is close to what you want... why not help to make it better?

Relevant song about this issue: https://www.youtube.com/watch?v=Wm2h0cbvsw8


Because then we'd all still be using an improved Angular, or even something earlier. If that's what you want then you're welcome to make that happen for yourself.


Sounds interesting. He mentions that his gzipped size for the RealWorld project is about 40KB. There was a good comparison done with this project[0] in which one of the frameworks (AppRun) weighed in at 19KB. 19KB is pretty crazy. Has anyone here used AppRun?

[0] https://medium.freecodecamp.org/a-real-world-comparison-of-f...


Why on earth would I give a shit about SEO for a web application? The SEO is for the brochure site that leads people toward the webapp, not the webapp itself...


“Compiler as framework” seems like a really terrible unifying vision — a far more interesting framing I think would be “type system as framework”. It’s the difference between “this app is assembled by cobbling together a bunch of random domain specific syntaxes” and “this app was assembled by combining together resources with different well-defined, well-enforced constraints.”


I’m having trouble visualising what you’re hinting at. How do DSLs relate to static typing? Are you referring to writing all code in the same language (as in JSX + JSS)?


I've used many web frameworks over the years and I really wanted to like this Sapper technology, but for the life of me I just don't understand what problem it's trying to solve. Can someone give me a comparison of what comparable technologies it is replacing?


There isn't really anything comparable. Really. It doesn't include the framework in the end build; it's only in the original source. So it's a compiler, not really a framework, that compiles to code that's performant for specific tasks.


There hasn't been a new, shiny JavaScript framework out in a few days. Perhaps Sapper is trying to solve that?


One of the authors of Next.js here. I want to clarify that all the points against Next.js are virtues! They probably just arise from Rich not being so familiar with it (or not having looked at the examples/ directory extensively: https://github.com/zeit/next.js/tree/canary/examples)

1. I think this point is trying to say that we don't have special "mask files" inside `pages/`, which is an idea worth exploring. Right now, Next.js instead gives you full flexibility on how you want to match the routes on your server (as it should be). For example, `your-domain.com/xyz` can even first perform a database query before deciding what page to render it with (if any).

2. What the author points out as a weakness here is that you can handle server things directly in your server code (`server.js` using micro or express for example). That sounds like the right thing to do to me, and if you don't want that, you still can do something like:

  // pages/my-server-page-only.js
  export default ({ req, res }) => res.end('hi')
3. "To use the client-side router, links can't be standard <a> tags." This is by design. Magically overloading `<a>` to add client-specific behavior sounds like the days of jQuery or TurboLinks. If you want client-side behavior, wrap `<a>` in the higher-order `<Link>` component. There are also very important properties of `<Link>`, like `<Link prefetch>`, that are simply not part of the `<a>` standard definition.

Also noteworthy: the entire ZEIT Documentation is written in Markdown with React components. It's open source: https://github.com/zeit/docs

On size: the Next.js and React teams are both working towards really interesting solutions to ship only the code that's necessary, without resorting to templating as the main strategy. Our team actually spends more time on the 'compiler' parts of Next.js than the 'framework API' parts. (the API has barely changed since the initial 0.1 release in fact).

Additionally, since Next.js uses React as a peer dependency, you'll also get to do `yarn add next react react-dom-lite` https://github.com/jquense/react-dom-lite in the future.

Overall, I think Sapper is an excellent framework for Svelte users. I'm looking forward to seeing more Next.js-like frameworks.

For Vue users: check out https://nuxtjs.org/!


Thanks Guillermo. I hope the respect and appreciation I have for the Next team came across in the post.

My criticisms aren't based on misunderstandings, however. Route masking provides flexibility but it really does undermine the ability to navigate a complex app structure — Nuxt.js evidently reached the same conclusion, because they too have dynamic route parameters encoded in filenames.

The my-server-page-only.js example results in a 'Cannot read property 'end' of undefined' error for me, because it seems Next can't distinguish between universal and server-only routes. If you have any examples of this working, please do share them!

The Markdown ZEIT documentation is cool, but doesn't really address the problem I was getting at with <Link>, which is that your content is likely to come from a database. (Incidentally, Sapper does provide support for <a rel=prefetch>, which works exactly the same as <Link prefetch>.)

> On size: the Next.js and React teams are both working towards really interesting solutions to ship only the code that's necessary, without resorting to templating as the main strategy.

I'm excited to see where that takes us. My conviction remains that templates will always provide more and deeper optimisation possibilities than JSX, but this is a competition in which ultimately we're all winners.


> Nuxt.js evidently reached the same conclusion, because they too have dynamic route parameters encoded in filenames.

Nuxt.js applications don't need to rely on filenames for routing. There's a handy Nuxt module for this: https://github.com/nuxt-community/router-module


ITT: a complete misunderstanding of how a universal JS web app works.


Agreed. As a JS app developer I now feel more confident about my job security.


Why don't you guys use Laravel when you need better SEO and server-side rendering? It is fast, easy to learn, easy to deploy, etc.


Could webpack be replaced with something else? IMO webpack is a giant mess.


Have you had a look at Rollup.js (incidentally by the author of Sapper)?


Is there a reason why most Javascript web frameworks, like this one, tend to ignore relational databases?

hmm, I don't even see a mention of how to integrate any datastore with Sapper or did I miss something?


It's just an Express app, so you can use whatever datastore you like — create a `routes/api/some-endpoint/[thing].js` file and export a request handler that interacts with your database of choice (or local files, or whatever).

Or serve your data from a different app altogether. There are so many possibilities here, depending on the requirements of your app, that it would be folly for a framework like Sapper to dictate how you solve this problem — instead, it gives you the tools to easily solve it yourself.
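For instance, a server route wired to a datastore might look something like this. A sketch only: `db` and `findThing` are stand-ins for whatever client you actually use (pg, mysql2, a file store, etc.), and the exact export shape is up to your setup:

```javascript
// Sketch of a routes/api/some-endpoint/[thing].js request handler.
// `db` is a stub standing in for a real datastore client.
const db = {
  async findThing(id) {
    return { id, name: 'example' }; // stubbed query result for illustration
  }
};

async function handler(req, res) {
  const row = await db.findThing(req.params.thing);
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(row));
}

module.exports = handler;
```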


> Is there a reason why most Javascript web frameworks, like this one, tend to ignore relational databases?

Yes, because these are frontend frameworks and that is a backend concern.


I could be wrong, but this doesn't look like a front end framework.


An ideal web app framework would not be in JavaScript...


Do you have any other frameworks in mind? Ones that easily and concisely hook into the DOM APIs provided by browsers? Ones that hot reload in dev? Or that accomplish even a handful of the 11 targets listed at the beginning of the article?

Because I don't know of any. Except maybe the new Rust framework, Yew. Which seems very interesting.


This ideal framework would probably work in an ideal browser, without the DOM. Doug Crockford put it nicely: today's browsers are a "vast source of incompatibility, pain and misery".


Perhaps you're right, there may not be one. However, I think the point the OP is trying to make is that JavaScript is a shitty language and it would be better to invest some effort in replacing it with something better than to continuously try to "fix" all its inherent shitty-ness over and over again to little avail. Sure, JavaScript isn't as shitty today as it was 10 years ago, but it still isn't a well-thought-out, well-designed, or consistent language.

For examples of JavaScript's shitty-ness, I'd point to its various function declaration styles and their disparate effects on the code you write, its wishy-washy scoping, namespacing, the number type's precision problems, NaN being of type number, the ambiguity of `this`, just to name a few.


yes. well put


FWIW, there's Dart, but at the end of the day it's still just JS. This isn't to argue your point either, but it's one possible answer to the question(s) you bring up.


It's fine to hold that opinion, but in order to usefully contribute to this conversation you need to give a concrete reason why.


Here's my reason, rooted in practicality and not taste: For me, an ideal app framework would cover both web and native apps. All the major native app platforms rely at least partly on AOT compilation. Android, in particular, is based on a runtime that resembles the JVM at the API level and takes JVM bytecode as input at build time. So for me, the ideal app framework would be in a language that's amenable to compiling ahead of time to efficient JVM bytecode, as well as efficient JavaScript and, for good measure, efficient native code for iOS and Windows.

I think Kotlin is becoming a good choice for these criteria. If you want to follow Mr. Harris's framework-as-compiler approach, Scala may also be a good choice, because of all the compile-time metaprogramming facilities it has. I don't know how Scala Native stacks up beside Kotlin Native, though, and I've read that Scala 2.12 requires Java 8, so that's probably not a good choice for downlevel Android support.


Have you looked at HaXe?


Is there any way to use Sapper.js with React, Vue or any other framework? I find the premise behind Sapper interesting, but I just don't see the appeal of Svelte, where they are basically reinventing HTML syntax.


Svelte is the premise behind Sapper. If you want to use React, use Next; if you want to use Vue, use Nuxt; if you want that 7kb Hello World (instead of Next's 204kb or Nuxt's 175kb) then you have to adopt a more efficient approach.

We're not reinventing HTML, we're using HTML. And CSS. ('Reinventing HTML' is a far more fitting description of JSX, and then there's CSS-in-JS...)

Doing so is what allows us to statically analyse your app and compile it to a) a tiny client-side JS payload, and b) an incredibly efficient server-side renderer.



