Regressive Web Apps (adactio.com)
107 points by thekingshorses 424 days ago | 80 comments



I'm so tired of people using "it doesn't work without JavaScript!" as an argument. To me it sounds as relevant as saying "it doesn't work without HDMI!"

What is so bad about JS that you just can't use the web with it enabled? Grow up.

The web is a platform that is evolving far beyond its original intent. In a few years it's going to be a binary delivery platform. And that's great news. The idea that the web needs to stay the way it was designed in the 90s is what's really regressive.

Here's what matters about the web:

- Linking

- Standards for Web APIs

- Content delivery protocols

Everything else - as far as I'm concerned - is a transient feature which will eventually be superseded.

The mentality that nothing should change about the web is incomprehensible to me. I just don't get it.

Edit: Inevitably, someone will tell me that the web is faster without JS, and another might claim accessibility as a problem.

- A faster web is made possible by faster devices on faster networks. Current constraints are far less relevant when we're discussing future technology. For now, yes page size is a problem but I absolutely wouldn't want that to shape future technology. ISPs have the power to improve their services as required.

- Accessibility is a standards problem, not a content type problem. Accessibility software should be expected to adapt to new technology like anything else. As long as we provide the tools for these platforms to succeed, the type of content that is being served (whether it's plain text, JS, or binary) shouldn't affect anything.


https://www.w3.org/2001/tag/doc/leastPower.html

From the perspective of a web developer or publisher, they of course want as much power as possible. But this is really a tradeoff, directly taking away power from end users - the accessibility you handwaved away, as well as things like adblocking, user-styling, caching, and every other unenumerable concern of endpoints.

I'm not saying that a delivery system for opaque executable blobs isn't the inevitable fate of the web, just that this shouldn't be viewed as progress but rather decay.


> the accessibility you handwaved away, as well as things like adblocking, user-styling, caching, and every other unenumerable concern of endpoints.

You are speaking of things that can and should be addressed with APIs provided by the browser.


Are you thinking this API would be extension-facing, or page-facing?

If the former, then the browser still needs to be able to understand the page to implement this API functionality. Turing-completeness is incompatible with this understanding.

If the latter, then that requires page authors to explicitly opt in to those mechanisms. Obviously they won't do this when incentives are opposed (e.g. ad blocking). And even in the cooperative case, it would require pages to anticipate all the features viewers would need.


I disable JS because, on a lot of sites, I'm only there to read words on a page, and those words come up instantly without JS and can take as long as 30 seconds with it.

Why do you think Google created AMP (which is just a fancy way of telling devs to stop using JS for text sites)?

Look, there are plenty of sites for which JS makes a huge difference - proper web applications. For every one of those sites there are 9 that load half a MB of cruft so they can animate their stupid banner.


I'm not saying text content doesn't have its place, but what I am saying is that unless you're absolutely certain your users aren't going to be using JS (hint: that's effectively no one), it's just not worth your time to accommodate those users - especially when they deliberately disable JS, AND you have to degrade the experience for the rest of your users in most practical applications. How often do you design features for 1% of your users?


>AND you have to degrade the experience for the rest of your users in most practical applications

I would say the opposite. The absolute majority of developers who insist on requiring JavaScript with no option to gracefully downgrade on pages that do not need it are the ones degrading the web experience.


A lot of it is about performance, if you need JS to fetch the content of your site after the SPA or Shell has loaded then you are incurring a significant amount of overhead for something that could have been streamed from the server and rendered as it was parsed by the browser.


Performance will come as the need develops. Yes it's faster to render something on the server right now, but that may not be the case in the near future. The concept of loading something from the server is also only accounting for a very specific use case. These days most websites will require more data from the server after the webpage has been loaded anyway.

Server rendering simply won't make sense for some binary applications, such as games.


Playstation Now exists and is server rendering for games: https://en.wikipedia.org/wiki/PlayStation_Now


All the time, as 1.2% of my monthly users is 1.15 million people (hint: that’s effectively a lot of people). People will be accessing my content and using my services with Javascript deliberately off or situationally unavailable, and they need to be able to do so. It’s defensive design.


From my experience having JS turned off, most sites render the text I want to read in a readable fashion sans JS.


Is it really true you don't need JS for AMP? Check out Washingtonpost.com/pwa


AMP pages can’t include any author-written JavaScript. Instead of using JavaScript, interactive page features are handled in custom AMP elements. The custom AMP elements may have JavaScript under the hood, but they’re carefully designed to make sure they don’t cause performance degradation.

https://www.ampproject.org/docs/get_started/technical_overvi...


Thanks for sharing. How do you go about taking a current website and building a PWA? Do I have to change my website architecture drastically?


I'm interested: if my site already is 'amp', do I really need to have a duplicate page? And add even more markup?

I don't understand AMP. Google already uses loading time as a ranking signal; it could just weight it even more.


If your web page doesn't work without javascript, it is probably doing something ranging from malicious (like refusing to let me download an element displayed on a page) to staggeringly malicious (like tracking me with analytics) to petty (like dictating how I read a page).

It's not that nothing should change, it's that change should make things better. I firmly believe pages made better by javascript are such a small minority as to be a rare exception, but overall the web is so much better with that 'feature' disabled.


> I'm so tired of people using "it doesn't work without JavaScript!" as an argument. To me it sounds as relevant as saying "it doesn't work without HDMI!"

I don't understand your reasoning; I don't have any HDMI equipment (except my RaspberryPis, which are headless servers anyway), so I certainly would avoid anything that's HDMI-only.

Just because you, at the moment, only use the Web for a few pre-approved purposes (e.g. viewing rendered pages) via the handful of user agents which support Javascript, on operating systems supported by those user agents, on hardware supported by those operating systems, which is powerful enough to run it all, that doesn't mean everyone else does. Some are less fortunate, as they have older, or less-powerful equipment. Some are more fortunate, as they are using the Web in more imaginative and novel ways than you are.


Okie doke, you are missing out on a lot of HDMI-equipped hardware, and if you don't use JS, you are missing out on a lot of JS features.

Web pages can be rendered any which way you please, it's no skin off a developer's back if you don't have JS enabled for their JS project.


> Okie doke, you are missing out on a lot of HDMI-equipped hardware, and if you don't use JS, you are missing out on a lot of JS features.

I don't specifically avoid HDMI, I've just never encountered a situation which involved HDMI. Also, I hear it's riddled with DRM, and I don't see the point upgrading when it seems to be becoming obsolete anyway (DisplayPort? ThunderBolt? Newer USBs? etc.)


I have no issue with JavaScript itself. I have issues with what web developers do with it:

- Hideous wastefulness of resources, causing constant unnecessary repaints, enormous RAM usage, etc

- “Experiencejacking”: things like scroll hijacking, back button hijacking, etc

- All the privacy invasions that JS enables

These wouldn’t be issues if sites suffering from them were the minority, but they’re not. Most sites have mountains of poorly optimized JavaScript that gives zero thought to performance or resource consumption. Once it works, it’s shipped.

The fact is that front end web dev culture as a whole places very little value in things like performance, battery life, resources, and ultimately, respect of the end user’s desires, and that’s a huge problem. As long as the whizz-bang shiny flashy checkboxes are checked, it’s all good.


The problem, as people see it, is there is a whole space of websites that use features they don't need. Think of all the one-page-app content sites.

To borrow your comparison, people who are bothered by what I describe above see it like a website saying, "Hey, we won't work on your device unless it's HDMI because of the design choices we made!"


The difference now is that most people are using browsers that will automatically update in order to work with new features.


And someone will mention how a full-scale language inside a browser is the hugest attack vector of all. Soon it'll be a binary delivery platform - yay!


It's not a real environment though. It's a sandboxed environment with limited, user-authorized APIs.


Nowadays even Wikipedia has a section on JavaScript sandbox implementation errors. Even without taking JS into account, browsers, colossal beasts they are, have had a history of security vulnerabilities in HTML, CSS and image decoding routines. With JS added... again, Wikipedia says it best: "JavaScript provides an interface to a wide range of browser capabilities, some of which may have flaws such as buffer overflows." Ergo, no amount of "sandboxing" will ever save you from trouble.


I'm a JS evangelist for sure. I love the language, but I'm not a true believer. Believe it or not, I keep JS turned off on my mobile browser because I enjoy instant page loads. I remember the first time a coworker asked me "but what if they have JS turned off?" I thought "what a dinosaur. Who doesn't have JS turned on?" Obviously I had a change of heart on the subject.

I never write JS > ECMAScript 5. Why? Because I want my apps to work in as many places as possible and I don't feel like arrow functions are worth breaking backwards compatibility. I know there are those who will disagree. I'm all for pushing the web forward, but I feel like, in the process, some of us have lost sight of what the web is ultimately about.
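As a concrete illustration of the trade-off (my example, not the commenter's): the ES6 arrow form is terser, but it is a syntax error in an ES5-only engine, whereas the ES5 form parses everywhere. Arrows also bind `this` lexically, so they aren't always a drop-in replacement.

```javascript
// ES5: verbose, but parses in every engine back to old IE.
var doubled = [1, 2, 3].map(function (n) {
  return n * 2;
});

// ES6 equivalent (fails to parse at all in an ES5-only engine):
// var doubled = [1, 2, 3].map(n => n * 2);
```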


> I never write JS > ECMAScript 5. Why?

That's a bold statement. Why ECMAScript 5 and not JavaScript 1.4 for that matter?

Programming languages change and evolve. ECMAScript 6 introduces a lot of features, including some that greatly improve the readability, maintainability, and cleanliness of the code. Sure, right now you may need to hold back on some features if supporting legacy Internet Explorer versions is your goal and you don't use a preprocessor such as Babel, but at some point the only browsers that don't support ES6 will be browsers that are unsupported by their vendors and inherently unsafe; so why stick with ES5 in particular? Or are you simply waiting for the moment all current browsers support ES6?

I get not wanting to use bleeding edge features supported only in the latest developer edition of Chrome or Firefox, but at some point ES6 is supported in 100% of the browsers people can safely use, and a few years after that ES7, and so on.


> That's a bold statement. Why ECMAScript 5 and not JavaScript 1.4 for that matter?

Because most devices will run most of ES5, very few devices will run any ES6.

Now there are transpilers sure, but I remember a time Javascript was about simplicity and not relying on a complex grunt pipeline + Babel. Now JS tooling is more complicated than Java's only with JSON in place of XML. The irony.


I think for most people the time you save using ES6 features is more than the time it takes to set up a build system, at least for larger projects.


I would have to disagree. There's no way saving a few keystrokes on an arrow function or being able to use 'class' will ever compare to understanding, let alone debugging, gulp streams. IMO JS tooling has gone way overboard, but in code, as in any art form, everything is subjective.


I think you might be misreading the parent statement as "I will never write JS > ES5".

I too find that the easiest to implement features are (not terribly surprisingly!) the least interesting; they are not worth the terrible friction that I'd see in my current organization. I have a hard enough time trying to get folks to write good test coverage and understand the gotchas of writing async code everywhere. So for me, there's way bigger fish to fry than (exaggerating a bit here) saving a few characters with arrow functions.

But that's still not to say I won't ever use it.


you are probably too young to remember the time when using <script type="javascript1.2"> was bold and bleeding edge. (probably got the syntax wrong, because, well, that what happens to memory when you are not too young)


I feel like JS is an expected part of the web just like HTML and CSS. How can you build engaging web applications if you are not even allowed to rely on standards like JS (e.g. if the user turns off JS)?


> I feel like JS is an expected part of the web just like HTML and CSS. How can you build engaging web applications if you are not even allowed to rely on standards like JS (e.g. if the user turns off JS)?

The web is not about 'engaging web applications'; the web is about webs of interlinked documents. Every time you require JavaScript to display a simple document; every time you load images with JavaScript instead of <img>; every time you replace an <a href=> link with a JavaScript event handler; every time you fail to even link to a page; every time you use JavaScript-loaded fonts to display symbols: you break the web.

'Web application' is a misnomer: it should be 'browser application.' Despite my loathing for documents (e.g. blogs, articles &c.) which require code execution in order to be read, I don't mind browser apps where they make sense. Google Maps makes sense: it doesn't bother me that it doesn't work in eww, or links, or with NoScript.

Blogger doesn't make sense: there's absolutely no legitimate reason for it to show an empty page with JavaScript turned off: Blogger is breaking the web. imgur's failure to show all images, and Cracked's failure to show any images, without JavaScript makes no sense: they are breaking the web.

People who break the web should be deeply ashamed.


> the web is about webs of interlinked documents

If this is true, and my browser can follow a link to a Blogger page, displaying the intended document using universal browser features, then the Web is not broken.

This kind of evangelism makes no sense. The whole idea that "the Web is about documents, specifically" was shattered as soon as Javascript was added over two decades ago. It's no longer about that idea and hasn't been for a long time. Interlinked resources, yes, and documents became a subset of that idea in 1995.

The desire for document-type content to display without JS stems from a strange desire to see the technology used how it was invented in 1990. The Web was a hodge-podge of technology then, and it's even more so now. If you intend to pick and choose which parts of the hodge-podge to use, you will find yourself frustrated forever. And you gain no advantage by arguing "but I want the _original_ hodge-podge to work with the type of content it was designed for!".

JS _is_ the Web just as much as HTML because all (Edit: popular) browsers support it. As soon as that happened, compatibility with the Original Hodge-Podge was dead, permanently. For better or worse, that is the reality.


> If this is true, and my browser can follow a link to a Blogger page, displaying the intended document using universal browser features, then the Web is not broken.

Your second premise doesn't hold for all browsers: links cannot display Blogger pages; Firefox with NoScript cannot display Blogger pages; the Tor Browser with high security cannot display Blogger pages.

The Web is, fundamentally, about resources — documents — and their state, not about behaviour.

> JS _is_ the Web just as much as HTML because all browsers support it.

No. They. Don't. eww doesn't. w3m doesn't. lynx doesn't. links doesn't. elinks doesn't. Firefox with NoScript doesn't. The Tor Browser with high security doesn't. That is the reality.

The reality is also that allowing JavaScript exposes users to severe privacy and security threats. Disabling JavaScript is the only way to control those threats.

Every site which requires JavaScript to function is like a serpent tempting a user into giving up his privacy and security. That is the reality.


> Firefox with NoScript cannot display Blogger pages; the Tor Browser with high security cannot display Blogger pages.

These configurations disable universal features, so the premise holds.

> The Web is, fundamentally, about resources — documents — and their state, not about behaviour.

Yes, in 1990. Also, you choose to conflate "resources" with "documents".

> w3m doesn't. lynx doesn't. links doesn't. elinks doesn't

These are browsers that choose parts of the standard hodge-podge and they suffer for it by not being able to display all pages.

_

I'm not arguing the situation is great or that you should like it. Hell, I don't even like it. Your criticisms of JS are valid. But your expectations of how the Web should work are _broken_ and they would have been broken 20 years ago. Whether it's 1996 or 2016, the only way your reality could come true is if there was an HTML-first programming style enforced universally. It's so impractical it's barely worth mentioning, so why evangelize?


> These configurations disable universal features, so the premise holds.

> These are browsers that choose parts of the standard hodge-podge and they suffer for it by not being able to display all pages.

So, the Web is not broken because all browsers which use a feature are able to display pages which use a feature — anyone who disables that feature is responsible for breakage due to pages using it, and anyone using a browser which doesn't support that feature is responsible for choosing such a browser.

Your logic would justify every vendor-lock-in attempt from Apple & Microsoft; it would support the idea that all the Internet's IE.

> But your expectations of how the Web should work are _broken_

No they most definitely are not. My expectations of how the Web should work are based on mathematical truth. REST is symbolic manipulation: it's correct in the way that only math can be.

> Whether it's 1996 or 2016, the only way your reality could come true is if there was an HTML-first programming style enforced universally.

Not HTML-first, resource-first. Figure out what your resources are, figure out how they are related, figure out what behaviour makes sense, then wire it all together. Once you have all your resources (images, HTML, whatever) working well, then feel free to wire it up with JavaScript.

HTML's power is that it is declarative, not imperative. It shares that character with plain text: when you read my words you do not give me root access to your brain. No-one should be forced to execute unknown, newly-served code in order to read a document.

When you require JavaScript, you break the Web. Please don't.


> Your logic would justify every vendor-lock-in attempt from Apple & Microsoft; it would support the idea that all the Internet's IE.

Except in this case the "vendors" are the makers of every popular browser and the standards body governing Web technologies. Not exactly apples-to-apples.

> My expectations of how the Web should work are based on mathematical truth. REST is symbolic manipulation: it's correct in the way that only math can be.

Right, a programming style. Your 'mathematical truth' is arbitrary given current web standards. The lack of it does not break interlinking, which, by your own admission, is what the Web is about.

> Not HTML-first, resource-first. [...]

You didn't actually address the fact that you would need to have everyone follow this style. You expect the Web to work in a way that can never be enforced; that is a broken expectation. I agree that this is the way it should be done, but it's completely obvious it won't ever happen. The incentive to follow that style is nowhere near enough to generate any sort of de facto standard. So why evangelize?


It also comes from a desire to use tools that are not browsers on the web. To archive, to search, to display them nicely when sharing them, ...


I agree.

Before the prevalence of JS you could simply download the text. But now, much of the text is interpreted. So if you wish to get all the text, you must also do the interpretation step, which essentially means running a headless browser to pipe it to you. And it's simply not practical to tell your text sources to avoid using so much interpreted text. They're using Web standards.


I agree with your distinction between browser applications and the web. It's very frustrating to be bombarded with tons of javascript that introduces nothing of value; usually advertisements and petty fanciness.

But that being said, I've been developing a heavy javascript SPA (Single Page Application) and have been writing javascript that is going to be very useful to (certain types of) users.

I'm more than happy to download a large bundle of javascript (and CSS) if I'm getting an application (and preferably not on my phone).

And in defense of SPAs, which don't even have "web" in the name: good ones are a portal to a location on the web (a model type and id), loaded usually via JSON with HTML re-rendered on the client. And even better ones have history implemented. So they can feel very much like the web... given JS is enabled(!)

But of course I understand the "why reimplement what browsers already offer?" objection. SPAs feel almost indistinguishable from a desktop app if done right...

What are your opinions on SPAs?


> What are your opinions on SPAs?

I think that some of them make sense, but many many many would be better implemented as RESTful apps instead. Once the REST layer is written and works without JavaScript, implementing an enhanced version which requires JavaScript and minimises redownloading of new data is relatively easy.

Imagine a chat app. The pure-HTML version could serve a chat message at /chats/{partner}/{message-id}, with a prev link to the previous chat; it could serve a chat sequence at /chats/{partner}/{message-id}?history={length}; it could redirect /chats/{partner} to the most recent message in a sequence. A form could easily enable POSTing a new message to the chat.

Once all that's done, it's dead easy to write some JavaScript which ties it together into a single-page app. Instead of requesting each message as text/html, the JavaScript client could accept application/json (or another format, if you like). It could subscribe to a WebSocket or server events. It could do anything - but the basic HTML-based app would still work, and be usable from a command line, from a simple phone, from anywhere.
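A minimal sketch of the URL scheme described above, with the route-building kept as pure functions (names and paths are illustrative, taken from the comment's hypothetical chat app, not a real API):

```javascript
// Build the resource URL for a single chat message.
function chatMessageUrl(partner, messageId) {
  return '/chats/' + encodeURIComponent(partner) + '/' + messageId;
}

// A message plus its preceding history, via a query parameter.
function chatHistoryUrl(partner, messageId, length) {
  return chatMessageUrl(partner, messageId) + '?history=' + length;
}

// The enhanced client reuses the same URLs and merely negotiates a
// different representation, so the plain-HTML app keeps working.
function jsonRequestOptions() {
  return { headers: { Accept: 'application/json' } };
}
```

Because both clients share one set of resources, the JavaScript layer is purely additive: remove it and the HTML version is untouched.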


I agree you're getting the most accessibility by making a blog that renders on the server. But rendering front ends with javascript shouldn't be considered breaking the web. It's a tool that's available in the browser, and many use it to try to make a better experience for their overall users. I think it's a shame we have the "first load, no content" problem, but developers are doing this with good intentions: making UIs respond faster, reducing back-and-forth network requests, and trying to make more interactive content on the web.

There are some warts we need to sort out and get better at, but I still see it as progress.


> rendering front ends with javascript shouldn't be considered breaking the web.

If you want to serve an empty, broken page as a first impression, you can. I strongly recommend against it because it makes your site look half-finished, poorly designed, and generally unprofessional.

> available in the browser

That isn't always true. You don't know anything about the browser a priori.

> make a better experience

Of course. Javascript can add nice features, which is why you progressively enhance the page when the underlying requirements are available. Do you skip the check for NULL when calling fopen(3)? Checking for errors is always important. On the web, this means you should gracefully handle problems like missing javascript (intentionally or otherwise) or other resources.
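The check-before-enhancing pattern might look like this sketch (the `loadFragment` callback is hypothetical; the point is that the plain `<a href>` keeps working whenever the check fails):

```javascript
// Upgrade a link to partial-page loading only when the environment
// actually supports it -- the web equivalent of checking fopen for NULL.
function enhanceLink(link, loadFragment) {
  if (typeof window === 'undefined' || !('fetch' in window)) {
    return false; // no enhancement; normal navigation still works
  }
  link.addEventListener('click', function (event) {
    event.preventDefault();
    loadFragment(link.href); // caller-supplied partial-load routine
  });
  return true;
}
```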

> Making UIs respond faster

Requiring two additional HTTP requests (one to load javascript, and another to load the actual page content) isn't making the page faster. After the first request your javascript will be available most of the time for partial loads in any case.

> interactive content

That is rarely a good idea, but it's your page so do what you want. Progressive enhancement doesn't affect this at all.


I think you and I are talking about the same thing. I was more responding to the extreme circumstances the other person posted. I agree that when you're making public facing content, it should render on the server. The faster UIs is true when you don't have to do full page reloads to load new content, but just grab serialized data from the web server instead.


Do you develop, professionally, for the web?


> But rendering front ends with javascript shouldn't be considered breaking the web.

If it doesn't work without JavaScript, it's breaking the web.

I have absolutely no problem with progressive enhancement. I have absolutely no problem with choosing to enable JavaScript when it makes sense. I have a problem with requiring a privacy-destroying technology in order to consume content.


I just wanted to say this is a really fascinating comment, and it's given me a lot to think about. It's an interesting perspective that I haven't run across before.


> I feel like JS is an expected part of the web

You feel wrong. I enable it when I have to, only.

> How can you build engaging web applications

Not my problem. I want quick loading sites with as few malware vectors as possible.

You should be building web applications which facilitate the transfer of information. You don't need javascript and all the other crap people are using these days for that. I mean, if you're all about pure eye candy and doing..I dunno, mandelbrots or something in the browser, then yeah, you're going to need javascript. But it's not going to work on mobile. Hardly anything like that works on mobile, which is fine by me; I do most of my browsing there.

To be honest, I think there's a market for a Firefox addon which removes links to sites which don't work on mobile or require you to have javascript enabled, to save me the effort of clicking back when I arrive at an empty page, or when the screen is dimmed and all I can see is the corner of some dialog I have no interest in zooming out to look at.


A few months ago I wrote a web-app that lets you pick an image off your computer, and convert it to a cross-stitch pattern.

I did it using javascript entirely client side, so it works offline, it's secure (as it doesn't send the images anywhere), and it is fast to load (letting you change settings and get the new cross-stitch in the blink of an eye).

With your idea of doing things, I would need to pay for a server (and would probably need to then charge my users, who wouldn't pay, so it's now DOA), it would be significantly slower, it would be less secure/private, and it wouldn't be able to do things like auto-change the size of the resulting image depending on the screen size of the browser (important for mobile! don't want to overload the browser with massive images).
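The auto-sizing step is, in essence, pure arithmetic; something like this sketch (my names, not the actual app's code):

```javascript
// Scale image dimensions to fit within a maximum box, preserving
// aspect ratio and never upscaling, so a phone browser is never
// asked to hold a full-resolution canvas.
function fitWithin(imgW, imgH, maxW, maxH) {
  var scale = Math.min(1, maxW / imgW, maxH / imgH);
  return {
    width: Math.round(imgW * scale),
    height: Math.round(imgH * scale)
  };
}
```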

That needs javascript, and for you and others to keep acting like anything that uses javascript is a toy that shouldn't exist is arrogant and wrong.


You should be shipping that as a browser extension. As it stands it's not good as a website, but it meets every criteria for a good extension.

It's even the kind of thing that passes for a native application in Chrome OS.


But if I shipped it as a "browser extension" I would need to make multiple versions for each browser, and it wouldn't work on most mobile devices...

And even as an app it's not that good. It's very single-purpose, there aren't really any "settings" that need to be saved; an app for this would be overkill.

As a website, it's quick to find on any device (desktop, laptop, phone, my damn TV!), it's extremely fast to install (if it takes more than a second to load on home internet I've done something wrong), has little to no footprint on the disk of the device, and does one thing and does it well.


> How can you build engaging web applications if you are not even allowed to rely on standards like JS (e.g. if the user turns off JS)?

I think it skews the conversation to talk about JS being "turned off", rather than unavailable, as it assumes that every user agent (browsers, screen readers, crawlers, CLI commands, etc.), even those cobbled together 5 minutes ago to do one quick job, is capable of running JS, has a DOM, is up to date, implements various APIs, that those APIs even make sense (what's the screen resolution of wget running in an SSH session?); and that the user has manually told it not to.


>How can you build engaging web applications if you are not even allowed to rely on standards like JS (e.g. if the user turns off JS)?

You can, but you can't force the user to interact with your 'web application' in the way you prefer. But you could never rely on the user's browser settings being optimal for the experience you wanted to provide anyway.

If you want absolute control over what the user sees and doesn't see, then build a native app. The price you pay for the ubiquity and convenience of web applications is putting control over the interface in the hands of browser vendors and end users.


> How can you build engaging web applications if you are not even allowed to rely on standards like JS (e.g. if the user turns off JS)?

What do you do for accessibility? Screen readers etc?


>I never write JS > ECMAScript 5. Why?

Because you started too early during the web's rise, and are now too old for that...

>Because I want my apps to work in as many places as possible and I don't feel like arrow functions are worth breaking backwards compatibility.

They are not, since everybody uses them with Babel.


> Because you started too early during the web's rise, and are now too old for that...

ES6 has some nice features -- all other things being equal I'd choose to write it over ES5. Or even ES3, since it fits with that sloppy ageist accusation you tossed off.

As soon as you say "Babel" (or another transpiler), though, all other things are not equal. I've already chosen to work with ES6 anyway in some circumstances, but honestly, when it comes down to it, the benefits are marginal enough over ES5 that there's a reasonable case that anyone already effective with ES5 doesn't need to transition.

And I might even go so far as to speculate that a programmer who is unable to be effective with ES5 might simply be the kind of programmer who can't really be effective whether they'd be working with Python, Ruby, Go, C#, whatever... but that would probably be tenuous bullshit on par with assuming anybody who doesn't want to add a transpiler to their toolchain in order to do essentially the same things they can do without it is Just Too Old.


The problem is not with the individual programmer. One can work wonders with Commodore assembly too. Of course they can do stuff in plain JS.

But they'd be reinventing the wheel a lot. If you want to take advantage of the JS ecosystem as it stands in 2016, you need to use its tools.

E.g. just consider using React with JSX -- you already need a transpiler for that.
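For anyone who hasn't seen it: JSX isn't valid JavaScript, so a transpiler rewrites each tag into a plain function call. A minimal sketch using a stub `createElement` (standing in for `React.createElement`; the stub and its output shape are illustrative, not React's actual internals):

```javascript
// Stub playing the role of React.createElement for illustration only.
function createElement(type, props) {
  var children = Array.prototype.slice.call(arguments, 2);
  return { type: type, props: props || {}, children: children };
}

// JSX source:        <a href="/item?id=1">thread</a>
// What Babel emits:  createElement('a', { href: '/item?id=1' }, 'thread')
var node = createElement('a', { href: '/item?id=1' }, 'thread');

console.log(node.type);       // 'a'
console.log(node.props.href); // '/item?id=1'
```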


I always say that URLs are the best invention since electricity and cars. Just a few bytes of text but convey so much information, and are universally supported by virtually all software. This is huge. Don't take it away from the people.


Even with a completely Javascript-dependent SPA, the URL is hugely valuable for driving the state of the app. And it's trivial to get your hash URL to work with the browser back/forward buttons, and to allow users to bookmark places in your app via the hash URL.
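To make that concrete, here's a rough sketch of hash-driven state (the `parseHash` helper and route shape are made up for illustration; in a browser you'd re-render on `hashchange`, which makes back/forward and bookmarks work for free):

```javascript
// Turn '#/story/11770774' into a plain state object the app can render.
function parseHash(hash) {
  var parts = hash.replace(/^#\/?/, '').split('/').filter(Boolean);
  return { view: parts[0] || 'home', id: parts[1] || null };
}

console.log(JSON.stringify(parseHash('#/story/11770774')));
// -> {"view":"story","id":"11770774"}

// Browser wiring (commented out so the sketch runs anywhere):
//   window.addEventListener('hashchange', function () {
//     render(parseHash(window.location.hash));
//   });
```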


URLs are not going away - like you point out, they are a reasonably good way for software to pass around a few bytes describing a resource location. Their usefulness for humans, particularly within a single site/web app, is less clear - e.g. if we ignore the domain name (which would be constant across a web app), this comment thread is described by 'item?id=11770774' - it doesn't mean anything, and so arguably is not worth displaying anywhere within the HN 'app'.


It absolutely means something: it's an article/thread ID number.

Now, that's not as transparent to grok or as high resolution as something like the date and title path schema that's become popular, but it's still something that some users will recognize and possibly even know how to get some utility out of -- if nothing else, by copying the URL and pasting it somewhere for sharing or use or storage.

And that's the worst case scenario for URL utility. If we get into a recognizable scheme like the date-title path, there's a lot more apparent information and it can often be transparent how changing the URL can be an interface.

For two decades we've had an interface that allows users who don't grok URLs to ignore them OR learn how they work by observation and experimentation -- and allows users who already grok them to easily note and access them.

This trend towards leaving them out doesn't allow that progression or utility.


The argument is about whether they mean anything to _people_, though, as part of the user interface.

URLs are hugely meaningful to software, and the ability to share them is critical, but that's independent of whether they are shown at the top of the window.

Semantic URLs are helpful, but they're an awkward tool for changing the interface as you describe it. If a web application is relying _at all_ on people moving URL path elements around to access the content they want, it has already lost.

I think URLs are fundamental to the web, but not fundamental to the user interface, which is an important distinction - I don't mind at all that in my native mobile and desktop application, there is no unique view identifier displayed at the top of the application window for every view available, and I think a mature web can equally hide, but not eliminate, that identifier.


The real power of URLs is for linking between sites, or linking to a site from another application. With HN as a website, it's easy for me to send someone a link to an HN comment thread over email or IM, and HN doesn't have to specifically support that use case. If it were an application, that would depend on whether and how HN had implemented a "Share" feature.


Sharing URLs between machines is critical, but they don't have to be at the top of every window to be shareable. I agree that simple URL sharing should be a core feature of a browser - but a small 'share' button that copies the URL into my clipboard would suffice.


HN has subpar URLs (in terms of UX), but they're still very useful. I can copy the bytes "https://news.ycombinator.com/item?id=11770774" and use them to get back to this comment thread later from any device with a web browser. I can give these bytes to a friend to refer them to this thread. I can share these bytes publicly and have a reasonable expectation that they'll allow anyone to retrieve the content on this page (not forever, but the Internet Archive and other caches have your back if this URL breaks in the future).


Making the HN URLs more human readable would also make them much longer. However, the longer URLs could provide a form of site structure if the URLs are 'editable' e.g.

  https://news.ycombinator.com/2016/05/25/regressive-web-apps
Then, if you edit the URL, you can get all stories for today:

  https://news.ycombinator.com/2016/05/25/
All stories for the month (which would require pagination):

  https://news.ycombinator.com/2016/05/
And all stories for 2016:

  https://news.ycombinator.com/2016/
And finally, back to the home page

  https://news.ycombinator.com/
Note: all these pages don't have to be explicitly linked or displayed on Hacker News, they might appear only when someone edits the URL in their browser.
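As a sketch, the whole editable hierarchy above can be served from one route pattern -- roughly something like this (the view names and regex are illustrative, not anything HN actually does):

```javascript
// One pattern handles /2016/05/25/slug, /2016/05/25/, /2016/05/, /2016/.
function matchDateRoute(path) {
  var m = path.match(/^\/(\d{4})(?:\/(\d{2})(?:\/(\d{2})(?:\/([\w-]+))?)?)?\/?$/);
  if (!m) return null;
  var year = m[1], month = m[2], day = m[3], slug = m[4];
  if (slug)  return { view: 'story', year: year, month: month, day: day, slug: slug };
  if (day)   return { view: 'day',   year: year, month: month, day: day };
  if (month) return { view: 'month', year: year, month: month };
  return { view: 'year', year: year };
}

console.log(matchDateRoute('/2016/05/25/regressive-web-apps').view); // 'story'
console.log(matchDateRoute('/2016/05/').view);                       // 'month'
```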

Edit: Just to add...here's a lengthy, but very detailed post by an ex-BBC employee about using URLs to model the complex data for BBC programmes

Designing a URL structure for BBC programmes http://smethur.st/posts/176135860


Definitely! URLs that have meaningful structure (like your examples) are awesome because in addition to shareability they also offer discoverability without needing extra out-of-band information. As you say users can guess ahead of time what content they'll land on when editing structured URLs.

But that's a feature that unfortunately we can't assume every website will provide (it would be great if we could!). My point was that even opaque/meaningless URLs are incredibly useful to humans.


I wanted to like this article, but it's not helpful to argue against assertions by making evidence-free counter-assertions.

> The end result may feel very “app-like” if you’re using an approved browser, but throwing the users of other web browsers under the bus is the very antithesis of what makes the web great.

1. The Polymer demo app is a _demo_ app - it's a tech preview, not a recommendation to drop support for any other way of delivering web content.

2. Who decided "what makes the web great"? One of the things that makes the web great is that people can build powerful app-like experiences using standardized tools. Some browsers don't (yet) support those standards, and some users opt out of allowing these standard tools to run (Javascript), but that's not the fault of the web, is it? Is the argument that the lowest common denominator must be served before spending any time and effort on making something more advanced? Count me 'regressive', then.

> The inability to pinch-zoom in native apps is a bug

I disagree with this assertion, although I acknowledge that it does have accessibility costs. It's at best naive and at worst disingenuous to claim that disabling pinch-zoom on the web is "slavishly copy[ing] whatever native is doing" - rather, some web apps are choosing to make similar tradeoffs as native apps.

> To declare that _all_ users of _all_ websites will be confused by seeing a URL is so presumptuous and arrogant that it beggars belief.

This is laughably hyperbolic. The linked Chrome issue simply disables an automatic prompt on websites that are configured in a very specific way - it isn't making a universal statement about URLs or whether or not they confuse users. To declare that the Chrome issue does otherwise is so presumptuous and arrogant that it beggars belief.

The final complaint that Lighthouse (a Google application) has a different standard for "best practices" than the author doesn't actually help his argument. Lighthouse isn't _the_ reference point, it's _a_ reference point, and one thing that web developers learn very early in their careers is that when it comes to the web, there is no one right way of doing anything.


Eventually the pendulum will swing back. The web app model beat out native apps (then called client-server apps) for very good reasons (no install, REST-fulness, simplicity, transparent linkability) but we've become blind to them through long exposure.

As surely as Water will wet us, as surely as Fire will burn,

the gods of the HTTP Headers, with terror and slaughter return.


Amen I guess?

I expect that for a lot of apps-that-really-should-be-websites, their owners will realize that having to (pay to) maintain both their Android and iOS apps and the 'normal' website is a costly affair, and that maintaining one responsive website is a lot more efficient.


"Progressive Web Apps" is a Google brand. It's made of some good best practices and great technologies, but they reserve the right to re-define it and change the rewards around it however they choose. Install buttons are the carrot, and search rankings are the stick. Will non PWA-compliant but otherwise fast, well-made mobile web sites rank as highly as PWAs?


Just a minor nit: yes, we have been working under the Progressive Web Apps headline for a while (I work on the devrel team), but it is a shared term that Opera and Mozilla are also using.

Search has no understanding of Progressive Web Apps to my knowledge. Performance, Mobileness (on mobile search) and TLS are all ranking factors of some sort.


Google isn't trying to make every website a PWA. Why would they punish all non-PWAs?


The current way we see the banner is that it is the equivalent of an app install banner, for web sites that meet the rough idea of what a Progressive Web App is and that you would want to install or have act like an installed app on the system.

The thinking at the time of the change wrt fullscreen or standalone was that to get that app-like treatment in the OS, you would expect the site to launch as an app would on the system (sans URL bar).

Note: we are also thinking of ways to get the URL bar back to the user when it is launched standalone or in fullscreen. It's a complex UX problem but we want to get the best of both worlds.
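For context, the fullscreen/standalone distinction lives in the site's web app manifest. A minimal illustrative manifest (the values here are made up; per the spec, `display` accepts `fullscreen`, `standalone`, `minimal-ui`, or `browser`):

```json
{
  "name": "Example News Reader",
  "short_name": "News",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#f6f6ef",
  "theme_color": "#ff6600"
}
```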


You can't please everyone, I guess. Who but the largest organizations has the resources to make a fully-responsive SPA that also has non-JS fallbacks for everything?


I think Jeremy's post resonates with a lot of our team (Chrome, DevRel etc), we want all experiences to be responsive, accessible, content visible without JS being required and most of all solid design that takes advantage of good URLs.

The App Shell model has worked well for a first set of experiences that have shipped, and I don't think it is inherently mobile-only - many sites work well on desktop. Instead, I think a lot of it is a matter of resourcing and staffing. After working with a number of developers who are building Progressive Web Apps, one thing I have noticed is that many of the teams building sites are still split between a "mobile web team" and a "desktop web team", and that is one of the reasons why I think we are seeing some of the sites being accessible on mobile first.

The place we want to get to is one experience that works well for all users in all browsers.

Specifically wrt banner prompting, our thoughts around this have always been consistent: when a site meets the criteria of something that could be experienced as an app, the user will see a prompt to "install" it (after meeting some engagement criteria). The thought was that prompting is a powerful feature and one we don't want to see abused, thus the criteria are quite strict, and our expectation is that the site, when launched, should launch like an app would on the system. The user can still add to homescreen; we just reserved the prompt.

Some interesting bugs for people to follow wrt prompting.

* https://bugs.chromium.org/p/chromium/issues/list?q=component... - prompting and add to homescreen bugs

* https://bugs.chromium.org/p/chromium/issues/detail?id=604390 - Minimal UI mode for access to URL

* https://bugs.chromium.org/p/chromium/issues/detail?id=601624 - Discussion on criteria for prompting

* https://bugs.chromium.org/p/chromium/issues/detail?id=601337 - Mandating Offline to get banner.


This all boils down to compatibility. You can make your webpage or webapp work on a majority of devices by rendering most of it on the server and delivering it as html with javascript handling the bits that can't be handled by html/css.

Or you can make your blog, which is essentially text on a webpage, that is rendered entirely by javascript. You can have your css download as javascript and be processed and rendered by the browser ... but why? Because they're new toys?

I think our infatuation with javascript has led us to the "when you have a hammer everything looks like a nail" situation.

Javascript is great but most developers I interact with aren't writing javascript. They're writing jQuery or Ember or Angular. They don't understand javascript as a language, they understand the flavor of javascript their library of choice gives them. They don't think about the fact that sorting a table in a browser will be a problem until it hits production, with 10,000 rows, and users without the top-of-the-line hardware are screaming.
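The table-sorting example is worth unpacking: sorting 10,000 rows of data is cheap even on weak hardware -- it's naively rebuilding 10,000 DOM nodes afterwards that kills low-end devices. A data-only sketch (the row count and score formula are arbitrary, just for illustration):

```javascript
// 10,000 fake rows with jumbled scores (the multiplier is arbitrary,
// chosen only to scramble the order).
var rows = [];
for (var i = 0; i < 10000; i++) {
  rows.push({ id: i, score: (i * 2654435761) % 1000 });
}

// The sort itself takes milliseconds, even on a cheap Chromebook.
var sorted = rows.slice().sort(function (a, b) { return a.score - b.score; });
console.log(sorted.length);                         // 10000
console.log(sorted[0].score <= sorted[9999].score); // true

// The slow part is what usually follows in jQuery-style code: removing
// and re-appending one <tr> per row. Paginating or virtualizing the
// rendering is what keeps low-end hardware responsive.
```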

It reminds me of the Windows vs Mac debate, which is really just a debate over control. Windows didn't control the hardware it was installed on; MS couldn't be sure everything would work, so there were layers upon layers of compatibility solutions. Apple controls (most of) its hardware, so it has a smaller playing field and it knows what the next commit to its repo will do to the majority of its users' experiences.

Javascript is the same way. As web developers, you (should) know what your servers can do. You can plan for capacity. Offloading everything to the users' browsers is just evil. Think about the kid with his $200 hand-me-down chromebook who really wants to learn about programming or the guy in Africa trying to visit your site over GPRS on his mobile phone. They shouldn't need the newest version of chrome with ES6 to view a blog entry.

There are times when an SPA is the right choice for the user. We just have to realize everything isn't a nail.

[edit: english]



