A PWA does not equal push notifications. Those are completely optional and should of course only be added where it makes sense. And even then, the user has to agree to receiving them first.
> ...or ability to visit your website offline
Fine, you don't have to. In that case, the service worker just improves page speed and reduces data usage.
Hmm, how does it do that? I don't see what advantages it may have over caching and HTTP/2 push...
No, it's not. In Apache and Nginx (and, I assume, any other half-decent web server) it is very easy to specify different cache times for each file extension. Same on shared hosting (.htaccess).
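For example, per-extension cache times in Nginx look roughly like this (a sketch; the extensions and lifetimes are arbitrary choices, adjust to taste):

```nginx
# Long cache for fingerprinted static assets, short cache for HTML.
location ~* \.(css|js|png|jpg|woff2)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
location ~* \.html$ {
    expires 1h;
}
```

On shared hosting the .htaccess equivalent uses mod_expires (e.g. `ExpiresByType text/css "access plus 1 year"`).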
Apache, yes, but what about Nginx? And there are many other ways to host your site (Amazon S3, ...).
Also, by impractical I mean you have to have access to the server (ssh...). IMO it should be more of a concern of the app than of the server. But I'm not advocating service workers for static resource caching (for websites).
EDIT: are we still talking about regular websites? First visit or repeat visit? Most websites can already take advantage of plain browser caching, and a service worker is IMO a hard sell here.
For example if you visit the admin login page, it can trigger a fetch of the assets for the admin page, then stuff them in its own cache.
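A minimal sketch of that idea (the URLs and cache name are hypothetical; the decision logic is factored into a plain function, with the actual worker wiring shown in comments):

```javascript
// Decide which extra assets to warm up based on the page being visited.
// The paths here are hypothetical examples.
function assetsToPrefetch(pathname) {
  if (pathname === '/admin/login') {
    return ['/admin/app.js', '/admin/app.css']; // what the admin page will need
  }
  return [];
}

// In the service worker itself, this would be wired up roughly as:
// self.addEventListener('fetch', event => {
//   const urls = assetsToPrefetch(new URL(event.request.url).pathname);
//   if (urls.length) {
//     event.waitUntil(caches.open('admin-v1').then(c => c.addAll(urls)));
//   }
// });
```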
Also, a service worker would add an additional process in Chrome and require 30-40 MB of RAM. I don't want service workers in my browser.
Well, web standards have always been this broken and illogical, for as long as I can remember.
Chrome debug tools work just fine.
> How to see if any of your visitors cannot load site because of error in a service worker?
The service worker fetch event is a promise, so you can catch any errors encountered and tell the client to go fetch the remote version instead. You can then do whatever analytics stuff you want with the error.
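A sketch of that pattern (`cacheLookup`, `networkFetch`, and `reportError` are injected stand-ins for `caches.match`, `fetch`, and whatever analytics call you use):

```javascript
// Try the cache; if the worker's own logic throws, report the error and
// let the client fetch the remote version instead of breaking the page.
async function respond(request, cacheLookup, networkFetch, reportError) {
  try {
    const cached = await cacheLookup(request);
    if (cached) return cached;
  } catch (err) {
    reportError(err); // a bug in the worker should never take the site down
  }
  return networkFetch(request);
}

// In a worker: event.respondWith(
//   respond(event.request, r => caches.match(r), r => fetch(r), logError));
```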
> How to decide which pages to preload for offline reading?
Well, who knows. But it's flexible enough to let you decide yourself - the last attempt, AppCache, forced you to decide upfront. With a service worker you could in theory use a user's browsing history to know what pages they are likely to look at. Or you can actually create responses manually in the worker, so you can dynamically generate an entire page locally on the device. It's very powerful.
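For instance, a worker can synthesize a full response without touching the network at all (a sketch; `Response` is the standard Fetch API class, available in browsers and globally in Node 18+):

```javascript
// Generate an entire page locally on the device.
function offlinePage(title) {
  const html = `<!doctype html><title>${title}</title><h1>${title}</h1>`;
  return new Response(html, {
    status: 200,
    headers: { 'Content-Type': 'text/html; charset=utf-8' },
  });
}

// In a worker: event.respondWith(offlinePage('You appear to be offline'));
```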
> Also a service worker would add additional process in Chrome and require 30-40 Mb of RAM
This part I can't argue with. I mean, Chrome already uses a different process for every tab, but more resources is more resources.
I don't see service workers as over engineering, I see them as a low level API. We just need some abstractions.
> The service worker fetch event is a promise,
By the way, promises are another poorly designed technology: they catch unhandled errors and do not report them in any standard way.
Why is that a problem?
> By the way, promises are another poorly designed technology: they catch unhandled errors and do not report them in any standard way
If you're using promises there is really no reason to not be handling your errors. Catching unhandled errors is by design and it works great. It's absolutely trivial to do.
And, as the SO post you've linked to points out, you can catch unhandled promise errors using window.onunhandledrejection. It's not like it's that dramatically different to window.onerror. Promises might not be perfect, but they're better than JS callback hell in absolutely every way.
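Roughly like this (the analytics call is a hypothetical placeholder; the browser hook is shown in comments, and Node's equivalent is `process.on('unhandledRejection', ...)`):

```javascript
// Browser-side safety net for rejections nobody caught:
// window.onunhandledrejection = event => sendToAnalytics(event.reason);

// The explicit alternative: end every chain in a .catch, so nothing is
// ever "unhandled" to begin with.
function loadJson(fetchFn, url) {
  return fetchFn(url)
    .then(r => r.json())
    .catch(err => ({ error: String(err) })); // degrade instead of disappearing
}
```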
> Catching unhandled errors is by design and it works great.
I don't think that it's great, because by default I would prefer to terminate the application if there is an unhandled error or rejection. This is called the "fail fast" principle. What the designers of promises suggest is to keep the application running even if it's broken.
JS isn't truly asynchronous in most usage today, but they are planning for that future (present, if you count Service Workers and Node/Electron).
Try googling "fail fast" if you want to see more reasons to use this principle.
> Tearing down an entire application immediately due to an asynchronous exception has the possibility to leave the system in a deadlock or worse.
It should not. An application can crash anyway, for example if there is a memory access violation or an out-of-memory error. And synchronous applications crash on uncaught exceptions; I don't see why asynchronous applications should not.
Sure, fail fast is very useful in debuggers to help a developer find and fix problems. But this is why we have debuggers. Debuggers are a terrible user experience, we don't need to force every user to live in a debugger and decide themselves which errors matter.
Not every exception is "fatal" and "might look like it is working ok" is often good enough in the pragmatic real world where you can't control everything.
Also, if you've never accidentally caused a machine to BSOD on a deadlock, or to catch fire from an improper halt, in a "fast fail" of multi-threaded code, that just means you are one of the lucky ones. There are disastrous consequences to bad multi-threaded code during "fail fast" teardowns of threads that are neither privy to nor the cause of the error. There are enough nightmare scenarios to keep a multi-threaded programmer up at night.
I appreciate your concern that developers sometimes ignore errors they should instead fix or handle more appropriately. "Fail fast" in a debugger helps with that. "Fail fast" in day-to-day operation isn't the right solution, if only because users may come to hate you, since you'll never properly handle 100% of exceptions.
Also I remember reading somewhere that the earlier you find the error the cheaper it is to fix it.
> why did my app just disappear, did I do something wrong?
Usually the operating system reports that there was an error in the application. If the application crashes often, the users will complain to the developers, who will fix it.
> Debuggers are a terrible user experience, we don't need to force every user to live in a debugger
You should test your app properly before releasing it.
> There are disastrous consequences to bad multi-threaded code during "fail fast" teardowns of threads that are neither privy to nor the cause of the error
I cannot agree with this. Usually the OS releases all locks, closes all files, frees memory when the process terminates. I don't see what can go wrong here and how you can get a deadlock because some process has terminated. Also if you use locking you should add some kind of timeout. For example, MySQL has lock timeouts and can even detect and fix deadlocks.
> "Fail fast" in day-to-day operation isn't the right solution, if only because users may come to hate you, since you'll never properly handle 100% of exceptions
The users will probably hate you more if the app silently hangs or shows incorrect data.
Why make a new non-standard event instead of just throwing an error and handling it with the default error handler? That is poor design: initially there was no reporting of unhandled errors at all, and only later was a workaround added.
It's a different handler because an application may need to know the difference between a synchronous error and an asynchronous error and handle them (dramatically) differently.
I think the reason is that some developers have a kind of exception-phobia and prefer to hide errors (making the developer check for them explicitly) rather than let them bubble up.
I see this on other platforms too. For example, in .NET, if a background thread crashes because of an unhandled exception, nothing happens to the main thread and no error is reported. The developer must check for the error, and of course most developers won't care about this.
Another example is how error reporting is done in the browser. JS errors are never shown to the user, so if a rich JS app has an error, the user won't know about it; the app will just stop working or show invalid data.
But I am somewhat skeptical of the implications beyond SPAs, which is that, across the entire Internet, basically every static website in the world should be converted to a PWA for an optimal mobile experience. Doesn't that seem backwards? Shouldn't mobile browsers have a way to "just work" and give a great experience for static websites, rather than changing all the sites? Shouldn't we only need something like PWA for rich SPAs?
A news site might see huge value in implementing push notifications, whereas a shopping site might not. Messaging sites may see huge value in background sync, but it will be of little value to others. So on, and so forth.
My main complaint is that the biggest win for PWAs is simple caching - you can explicitly choose which resources to add to a cache instead of relying on the normal browser cache. This becomes necessary on mobile because the browser cache is very small and is emptied far too often - I'd rather they solve that problem before forcing everyone to write a service worker just to ensure their CSS and JS are cached successfully.
HTTPS or Service Worker caching is not something a mobile browser can do for you by default.
Consider it a suggestion to developers who are tempted to implement cool new tech on a site just to tickle their own itch.
> The Pentium III processor's Internet Streaming SIMD Extensions Instruction Set has been a hit with software developers resulting in numerous optimized Web sites and software applications available today.
And you're ignoring huge benefits like offline availability, push notifications, storage...
The good news is that a service worker can't do that. It gets shut down automatically after a short period of time. It will be woken up again in response to push notifications, which you must request permission for, or in response to a new user event.
> Maybe my users pay for their data, or maybe they have a poor connection.
Which is why providing greater offline capability is a good thing, not a bad thing.
"Application" in PWA is best thought of as indicating the user experience on a mobile, with the ability to add to homescreen without downloading via an app store, and offline capabilities. Where PWA shines is website > AMP > PWA. See https://www.google.com/search?q=myntra+kurtis on your mobile for a great example of this transition.
For a personal static blog, it doesn't take any effort to be fast; just don't bundle lots of bs with it.
BTW, there are still optimizations that could be done on the author's site. See Google PageSpeed.
Make it too long and you may have trouble finding the next line. Make it too short and your eyes will suffer from traveling back too often.
When I come upon a site that uses too much horizontal space for me to read comfortably, I just leave.
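On the site's side, the usual fix is a single rule capping the line length (a sketch; the `article` selector and the 38em value are arbitrary choices, not a standard):

```css
/* Keep lines at a comfortable measure, roughly 45-75 characters. */
article {
  max-width: 38em;
  margin: 0 auto; /* center the column in wide viewports */
}
```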
I have a PWA that creates html/css in the build process.
If JS is enabled, it acts as an SPA.
If JS is disabled, it's a simple static site.
If a ServiceWorker can be used, it caches the html/css I built, which both the SPA and static-site versions can use.
If no SW, then fall back to AppCache/LocalStorage/etc.
The hardest part was finding a CSS framework that doesn't use JS and still can do menus/etc nicely.
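The fallback chain above can be sketched as a single feature-detection step (a sketch; `/sw.js` is a hypothetical path, and the decision is factored into a plain function so the wiring stays trivial):

```javascript
// Pick the best available caching strategy from the environment.
function pickStrategy(env) {
  if (env.serviceWorker) return 'service-worker';
  if (env.applicationCache) return 'appcache';
  if (env.localStorage) return 'localstorage';
  return 'none';
}

// In the page script, roughly:
// const strategy = pickStrategy({
//   serviceWorker: 'serviceWorker' in navigator,
//   applicationCache: !!window.applicationCache,
//   localStorage: !!window.localStorage,
// });
// if (strategy === 'service-worker') navigator.serviceWorker.register('/sw.js');
```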