Why I don't use web components (dev.to)
257 points by catacombs 26 days ago | 114 comments



On my side, the criticism I could make of web components is that there is no standard way to serialize their shadow roots and, therefore, they cannot be deserialized without using JavaScript. I have been maintaining SingleFile [1], a web extension to save complete web pages, for 9 years, and this is the first time I have had to include JavaScript code [2] to attach and display the shadow roots of the web components (e.g. embedded tweets) included in the saved page.

[1] https://github.com/gildas-lormeau/SingleFile

[2] https://github.com/gildas-lormeau/SingleFile/blob/93d1e7c000...


Thanks for your work on this and good job not having to inject JS for so long.

I’m surprised there hasn’t been any other nasty web stuff that required it.


Thanks! You can find web components in a lot of unexpected places. For example, this page [1] contains more than 10K web components... The good news is that once Pandora's box was open, I had the idea to code SingleFileZ [2], which also requires JavaScript to be enabled but at least makes honest use of it!

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=4

[2] https://github.com/gildas-lormeau/SingleFileZ


It’s always interesting what we volunteer ourselves to become experts in through our work in software. I respect any OSS developer that dedicates themselves to this type of thing.

I’m curious what Google used to create that Chromium bug report page (an internal framework or a custom job). The internal tools targeting devs with good browsers will always have the most advanced stuff, it seems.


Google uses LitElement (version 2.0.1) [1] to create the page. You can see it by typing `litElementVersions` in the JavaScript console.

[1] https://github.com/Polymer/lit-element


Right, of course it was their Polymer toolchain. Thanks


This looks really interesting!

Wasn't this the intention behind the older .mhtml format? I wonder what happened to that.

Also, do you have an option to choose to not save javascript?


You're right, MHTML is supposed to solve this problem. I wrote SingleFile because Chrome did not support MHTML 9 years ago. Now, it's not dead and Chrome supports it, but no other modern browser (i.e. not based on Chromium) supports it. That's why I continue to maintain SingleFile.

The JavaScript injected by the extension will be included if and only if there are web components in the saved page. By default, all other scripts are removed.


I manage a web application with hundreds, if not thousands, of screens written in Angular, Vue, React, and server-side JSPs -- all of it developed over a decade-plus worth of time. While old UIs will continually be re-platformed into the new, we will always have a legacy monolith. That's the reality of our world.

If I want to introduce a design system, where all users of these technologies can work with a basic set of bread-and-butter buttons, input fields, etc., Web Components are the lingua franca of all of these technologies. That's why I use Web Components.

Now of course, not everyone is developing on monoliths globbed together with newer tech. But the reality is, most of the software used today is written this way. It isn't the pure, functional, isolated modules/apps that startups building from scratch have the privilege of beginning with. Technology stacks slowly become the consequence of business acquisitions, etc. You can exert some control over things, but it will always be a moving target.


Personally, I just find the inability to declaratively pass state and/or event handlers to child components painful. Not to mention that testing with a full DOM (virtual or real) is really slow by comparison.

React + Redux + JSS just really matches how I've always conceptualized building web-based applications. I absolutely favor material-ui as a component library. I've been building web applications for almost two and a half decades. React is the first framework that felt "right" to me, and that still felt right after extended use. I've used a LOT of options over the years.

I do think WC can be used with React in some cases, and that it's probably a better approach if you're building mostly static content and want to add additional interactive features outside of core display.

I also think that in 5+ years there will be a couple of library options that make using WC a better option for application-style development.


Web Assembly will maybe change everything.


I think it will bring about a LOT of changes for targeted applications, most specifically gaming. I also find Blazor and a few other technologies related to WASM interesting. I do not think it will displace much as far as web-based applications go.

And all of that said, I happen to like JS. It's my favorite language. While I'm really green with it, I'd consider Rust my second fav. C# third. In the end, I find that I can be incredibly productive with JS + NPM, and that cannot be overstated. I'm able to leverage community repos that cover everything under the sun, write less than half the code, and better cover my use cases with good-enough performance. When I need more, I'll leverage other languages/tools; most of the time, I don't. And even when I do, I tend to lean on Node/JS for orchestration.


I agree it would be kind of a waste to throw away all of the tooling and innovations that have been made in the JS ecosystem, just to start over because another language can now be used easily thanks to WASM.

The list of languages is bigger than I would've thought: https://github.com/appcypher/awesome-wasm-langs

In terms of enterprise usage, I'm not seeing any mature polyfill implementations out there, but I did find https://github.com/lukewagner/polyfill-prototype-1.


Not trying to be rude, but there’s roughly 0% chance it changes everything.

It will change some things for sure. But we’ve had compile-to-JS languages for some time and they’re a pain. Compile-to-wasm is better, but it still has the same fundamental pain points (debugging mostly, and then browser compatibility).

By "changes everything", you mean no large number of devs will keep targeting vanilla HTML+JS, and I just can’t imagine that future.


Maybe the concept of sourcemaps could be applied to WASM to maintain the pretty awesome developer experience we've come to enjoy in browsers. I'm transpiling TypeScript -> JS and able to debug my code, with sourcemaps, in Chrome.
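
For what it's worth, the TypeScript side of that is a single compiler option. A minimal tsconfig.json sketch (the target value is just an example):

    {
      "compilerOptions": {
        "target": "es2017",
        "sourceMap": true // emits .js.map files so DevTools can map breakpoints back to .ts
      }
    }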


That’s true, it’s easier if you don’t have to do any cross browser testing.

But I still find the debugging very weird. The fact that the VM is stopped on a different line of code than I’m looking at seems to always introduce small little confusions. Have you never had that experience?

It’s possible I’m just overdue to try the tools again. I’ve had a lot of bad experiences.


I've started using web components in my last project and they have been very helpful in compartmentalizing functionality and promoting reuse. Things are a bit different in my experience from what's described in the blog. Here are the points as I see them differently.

1. Good point, though someone is going to write a server-side renderer to pre-generate the custom tags into regular HTML tags.

2. A web component can import its own CSS file; there's no need to put styles in JS code or strings.

    <style>
      @import "/css/shared-styles.css"
      @import "/components/my-widget/my-widget.css"
    </style>
5. Composing with sub-components works just fine, with or without slotted content. What's wanted in item 5 is a special case of dynamically including content at runtime. That's best left to Javascript, triggered by property setting.

6. For properties and attributes, it's simplest to just reflect all property get/set calls to attributes, using the attribute as the only storage. I have a helper function to do that.

    function attrAsProp(comp, attr, property) {
        Object.defineProperty(comp, property, {
            get: function()  { return comp.getAttribute(attr) },
            set: function(v) { comp.setAttribute(attr, v)     }
        });
    }
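
For illustration, here's how such a helper might be wired into a (hypothetical) custom element:

    class MyWidget extends HTMLElement {
        constructor() {
            super();
            // 'label' now round-trips through the attribute, the only storage
            attrAsProp(this, 'label', 'label');
        }
    }
    customElements.define('my-widget', MyWidget);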
9. The global namespace is a problem, but it also keeps things simple to use. We don't want complicated stuff like xmlns.


> (The three spec editors are all Googlers. Webkit seem to have some doubts about some aspects of the design.)

Here's my issue with a browser monoculture. With 3 or 4 major browsers, it was easy to tell what features would survive. The specs were never great at that -- there were some early spec'd features that never got widely implemented. But any feature that could convince people at Google + Microsoft + Apple + Mozilla to all implement it would be around for a long time.

With Microsoft switching to a Chromium-based engine, we've lost a valuable data point. Essentially, instead of new features being opt-in by default, they're opt-out. We've seen it before: when one browser has a majority market share, nobody else can afford to tell them an idea is bad.


Microsoft has argued that one potential benefit of them spending full time in the Chromium codebase is a bit more power to curb some of the worst of Google's folly, by making sure things remain opt-in by default and that things that haven't passed spec approval stay behind experimental flags and such.

So long as Google follows the ideals of Open Source Governance on the project and listens to PR feedback, Microsoft might not be wrong about that.

That said, Google also has a history of taking their ball and going home when they aren't getting what they want from Open Source Governance (the increasing divorce between AOSP and Google Play Services for example, or closer to this particular debate the fallout with Apple that caused the Blink fork from WebKit). I'm almost tempted to bring popcorn to the Chromium mailing lists in anticipation of whatever drama is about to happen.


This is a good point, but I do feel you're missing a step, and it's one open standards folks sort of gloss over because it's kind of icky. A feature doesn't directly "convince" other browser teams to implement itself. The process, instead, is more like:

1. Browser A adds new non-standard feature.

2. Users like it enough to start using it widely in their apps.

3. Other browsers are now motivated to implement the feature so that those apps work in their browser too.

4. Because of that, there is now incentive to standardize. Finally, the feature becomes a standard.

The sketchy part is that step 2 only works when users deliberately choose to build apps that only work in a single browser. That idea is anathema to standards folks, but as far as I can tell, the consensus process does rely on it most of the time. That's how we got JavaScript, XHR, <canvas>, etc.

It's very hard to convince several independent implementation teams that a feature is a good idea if no one is using it yet.


Doesn't Chrome follow W3C standards though?


Kinda. The W3C is now following the WHATWG, so with Google as the major stakeholder by market share, it's more like the tail wagging the dog.

- https://www.neowin.net/news/w3c-and-whatwg-agree-on-single-h...


Haha that's nuts. I see your point.


The Web Components API is too imperative. It's hard to compose, and it's too easy to let data get out of sync (not being reactive).

For a broader meaning of Web Component (plain React, Vue, etc.), they provide good encapsulation and some degree of local reasoning, but it's harder to communicate and stay in sync with other components/the server.

For Web Components with a centralized store approach (Redux, MobX, Vuex, etc.), it breaks local reasoning, and it requires the user to manage the life cycle of data instead of just reusing the life cycle of components.

So there is still quite some space to improve. I found the idea of GraphQL co-location pretty interesting. And I also found there's a ClojureScript library named Fulcro [0] specifically trying to provide a global data store while keeping the ability to reason locally.

[0] https://github.com/fulcrologic/fulcro


Web components APIs are low level. They allow code to know where to do things (the element instance, the shadow root) and when to do them (constructor, the lifecycle callbacks).
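
As a minimal sketch of those raw hooks (hypothetical element, plain platform API, no library):

    class MyCounter extends HTMLElement {
        constructor() {
            super();                           // "where": the element instance
            this.attachShadow({mode: 'open'}); // "where": the shadow root
        }
        static get observedAttributes() { return ['count']; }
        connectedCallback() {                  // "when": inserted into the document
            this.shadowRoot.textContent = this.getAttribute('count') || '0';
        }
        attributeChangedCallback(name, oldValue, newValue) { // "when": attribute changes
            this.shadowRoot.textContent = newValue;
        }
    }
    customElements.define('my-counter', MyCounter);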

They are not a direct analog to modern frameworks, and it's completely expected that developers will use helper libraries to get better DX like declarative and reactive components.

I work on two such libraries: lit-html and LitElement. Combined they give you just as much declarative and reactive power as frameworks, much better DOM and style isolation, better performance, and smaller size than just about anything else out there.

Compare a styled Hello World example to React or any other framework:

    import {LitElement, html, css, property, customElement} from 'lit-element';

    @customElement('hello-world')
    class HelloElement extends LitElement {

      static styles = css`
        :host {
          display: block;
        }
        p {
          background: pink;
        }
      `;

      @property() name;

      render() {
        return html`<p>Hello ${this.name}!</p>`;
      }
    }
(this example uses decorators because I think it's clearer, but we support plain JS as well)


> ... and smaller size than just about anything else out there. Compare a styled Hello World example to React or any other framework:

Compared with svelte from the article:

    <script>
        export let name;
    </script>

    <style>
        p {
            background: pink;
        }
    </style>

    <p>Hello {name}!</p>
# Size comparison

## lit-element

lit-element.js?module: 8421 bytes

lit-hello.js: 340 bytes

## svelte

bundle.js: 2628 bytes

bundle.css: 72 bytes


1) That's the raw unbundled module source from unpkg. Bundled, minified, and gzipped lit-element + lit-html is about 6.7K.

2) Svelte bundles the runtime operations that each component uses directly into every component. HelloWorld doesn't use much, so it'll be small, but most components are not HelloWorld and most applications will not have only a single component. The cost of the common dependency on LitElement is amortized over all the elements that use it.


> Bundled, minified, and gzipped

Why gzipped? Bytes of source is a fairer comparison. Parsed source code should be the comparison, not transfer time.

> HelloWorld doesn't use much, so it'll be small, but most components are not HelloWorld and most applications will not have only a single component.

Then why dare others to compare sizes of a Hello World example? :-)


Huh? I'm not comparing the size of hello world examples.


Then I read your first comment wrong. I was referring to my quote in https://news.ycombinator.com/item?id=20236130

Regarding the comparison in source bytes, here it is:

minified lit-(element + html)[1]: 21.4k

I'm not sure it "amortizes" against Svelte when having more components, given that the lit runtime alone is roughly 8x the size of the entire Svelte hello world bundle. Would love to see an example. I tried finding lit-element in the "Real World Example" [2], but did not find it. Maybe you have some pointers?

1: https://bundlephobia.com/result?p=lit-element@2.2.0

2: https://www.freecodecamp.org/news/a-realworld-comparison-of-...


Looks a lot like CSS-in-JS to me; have you considered simply using the platform instead?


This passes the CSS text to CSSStyleSheet.replace(), so it very much is using the platform.

CSS Modules will make it possible to directly import a Stylesheet: https://github.com/w3c/webcomponents/issues/759
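
For reference, a minimal sketch of that constructable-stylesheet API (Chromium-only at this point):

    const sheet = new CSSStyleSheet();
    sheet.replaceSync('p { background: pink; }'); // replace() is the async variant
    // One sheet instance can be adopted by the document or any number of shadow roots:
    document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];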


It's a problem of mixing data and visual elements.

In a good model of the web, html is your data, css is your styling, and javascript doesn't really exist.

In another model of the web, your data would be your data, and html/css would work together to form a scene-graph. Javascript defines the relationship between your data and its scene-graph.

That friction (are we using html as data, or as part of our scene-graph?) is what causes a lot of these problems.

I like web components because they bring you closer to having html be just your data. I like react because it brings your html closer to being just your scene graph. Both solve the same underlying problem from different directions.


Well, I thought the 'good' model of the web was: schema-based XML as your data, XSLT as your representation structure logic, HTML as your representation structure, and CSS as your representation styling. But that ship sailed a while ago...


The road to hell is paved with XSLT. It was a good idea, but early 00s X-everything were terrible implementations.


XSLT is an unpleasant language (particularly for beginners) but implementations I've seen tend to be correct and stick to the standard; what trouble did you find yourself in?


> Well, I thought the 'good' model of the web was schema-based XML was your data, XSLT was your representation structure logic, HTML was your representation structure

Not XSL:FO?


Good article. This articulates a number of issues that I wasn't able to put into words, particularly around this idea that the DOM can be a really inconvenient way to track state.

I still kind of feel like I need to sit down and figure out exactly what my issue is with the current spec, but roughly speaking I guess I'd say that the DOM should be a standardized display layer, not a way to store internal application state. Or to put it another way: the DOM is for your users, not for your application. I don't think that fully captures the problem, but it's a bit closer.

Regardless, I often flail when people ask me why I'm a little skeptical about the way web components are currently designed, so at least now I have something to point them towards.


When you use radio buttons, checkboxes and text inputs, they have state and that state is in DOM. Web Components don't change this.


Again, this is tough to articulate, and I'm not sure I'm going to do a good job, but:

Good design seeks to limit the amount that we rely on that state in the DOM. Typically speaking, I try to listen for events on input elements. I don't rely on them as a state store.
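
To sketch the distinction (with a hypothetical email field): capture input via events into application state, rather than re-querying the DOM for it later.

    const state = { email: '' };

    document.querySelector('#email').addEventListener('input', (event) => {
        state.email = event.target.value; // application state is the source of truth
    });

    // Later code reads state.email instead of document.querySelector('#email').value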

Text inputs in particular here are kind of a good example, because they often get used in combination with validation and autofill. I've seen a lot of bugs and spaghetti code come out of running validation on field update, and using that validation to update or fill in other fields, and needing to track which update will trigger another event, and so on.

I'm not saying we should get rid of interactive elements on the web, but I think they fall into something of a grey area that shouldn't necessarily be encouraged. I use inputs to get the current user state, and to respond to user input -- I try very hard not to use them as a state store for application-level data.

I kind of think I would still say the same thing about inputs as I would about regular HTML elements -- input elements are for your users, they're not for you. They're not your interface, they're just the way your users communicate with you.


I find it so funny how people used to make fun of ColdFusion because of its tag syntax and the ability to create your own custom tags. Seems we have come full circle now on the frontend. ColdFusion was truly ahead of its time.


I think this is missing the forest for the trees. The key innovation behind most modern frameworks is DOM abstraction and reactive data flow. The main reason JS apps were, at one point, ugly spaghetti directly manipulating DOM structures with jQuery is that JS was dog-slow at the time and not really capable of the cool object-oriented stuff you could do in desktop apps. jQuery answered the problems of its time: it was relatively fast, provided a terse interface (vs. the DOM), and, probably most importantly, it smoothed over browser bugs and inconsistencies. Angular.js finally popularized components in the post-IE6 era, when JavaScript execution was a lot less slow and unreliable; then React popularized reactive data flow and the virtual DOM.

I think most good ideas in software engineering have already been discovered, and the primary innovation has long been applying existing ideas more effectively. It's hard to argue Angular.js or React does anything that has genuinely never been done before, and the same goes for ColdFusion.


I would argue that YUI and Dojo were the ones that popularized components, and that Angular 1.0 directives were a step backwards in truly encapsulating components, since you could bleed scope all over the place. I agree with you on JS speed.


This is fair. I didn’t use YUI too much, but in retrospect it was pretty modern for its time.

Still, I don’t think YUI necessarily had the mindshare the same way Angular or jQuery once did...


> JS was dog-slow at a point and was not really capable of doing the cool object oriented stuff you could do on desktop apps.

Absolutely wrong. Javascript was always fast enough to make web applications that had the responsiveness of desktop apps. The problem is that nobody was writing desktop apps in the browser; they were writing server-generated web pages that offered users some extra interactivity via JS. So they had to look for elements in a static template and hook into them or perform transformations on the fly, instead of generating everything programmatically as is done now.


You dismissed my claim without any numbers, so too shall I. Of course it was slow, extremely so. Pointedly, objects were especially slow, and even a performance cliff in V8 in many circumstances. Take a look at a React flamegraph; it’s incredible how deep and wide the call stacks are, for things that can easily run 60 times a second. You couldn’t do that years ago.

IE6 and 7 were so slow in both JavaScript execution and DOM that it’s silly anyone would ever claim JS execution was “fast enough.”

The art of fast JavaScript was once arcane, but its clear we’re so far into fast JS execution that nobody even remembers how awful it was. Especially when you had to support IE.

Oh, and let’s not forget the incredible increases in CPU speed. Even a later Core 2 Duo machine I have is awful for browsing the web, basically unusable without uBlock Origin.


Sorry, I dismissed your claims because I was developing javascript single-page applications in 2005 on IE6 (and on IE5 at the beginning, I think). That was before XMLHttpRequest; I was retrieving data from an invisible iframe populated with arrays. I was able to develop CRUD applications and dashboards with hundreds of objects with (almost) desktop-like responsiveness. Of course there was no jQuery; elements were created programmatically and I kept a pointer to each of them, as is normal in desktop apps. I never thought of this as arcane; it was actually very natural.


I don’t know why you assume I hadn’t been doing web development in 2005, but of course. I attempted SPAs and games in JS for a long time, and we have very different ideas about what performance means!

I mean sure. Even pre-XHR you could write apps that were effectively SPAs, but manually manipulating HTML elements in your business logic isn’t exactly what I’d consider “modern.” Desktop apps at the time, even ones written on Win32, tended to put a couple layers of abstraction around the underlying resources, to say nothing of Qt and its entire programming model worth of abstractions. YUI, mentioned above, was already a significant step up from that.

But those toys were nothing compared to what you can do today. Today you can literally compile the entire Qt libraries and a whole Qt app to JS and run it on Canvas and it’s usable on a fucking phone. Part of that is w3c typed arrays (and WASM, for extra perf,) but most of it is just years of raw optimization and CPU speed improvement.

In 2009, I was still struggling to draw 60 frames per second (important to note: consistently. It was actually hard, at higher resolutions.) on canvas 2d. Just a few years later Unreal and Mozilla demoed Unreal Engine running at 60 fps in pure JS and WebGL.

No reason to diminish what I’m sure you did accomplish in 2005 - it was probably amazing for 2005 - but I absolutely disagree that we didn’t experience life-changing performance improvements. The performance improvements we got enabled huge React and Angular apps to be possible. If you don’t believe me... again, try browsing the modern web on old machines. I've just recently set up a machine with an 800Mhz PowerPC from ~2004. Even with fairly modern Firefox, and adblocking, modern sites are basically unusable.

Addendum: today’s bundle sizes also would not have worked too great on older internet. Another performance characteristic that is largely taken for granted today: parsing and network speed!


> I absolutely disagree that we didn’t experience life-changing performance improvements.

Of course I don't disagree with this either. But this (as well as trying to run modern websites on older hardware) doesn't mean much; it's a fact that software always fills up all the capacity allowed by hardware, sometimes with features, but more often with frills and more layers of indirection. What I disagree with is that javascript (and browsers) were too slow to make interactive web applications: in fact they were possible, and the ones we still use were born around 2005: Gmail (probably faster then than now), Google Spreadsheets, Maps (I made a complex dashboard in 2005 incorporating Microsoft maps, which became Bing), etc.

Programming was much more "close to the metal" (as much as this means anything in a web browser), but the results were usable: not as graphically fancy or feature-rich as today's, but OK for many use cases.

Bundle sizes have also exploded as a result of the complexity of the frameworks. But that doesn't mean the applications are more interactive or have more features; they're just fancier and, from the point of view of developers, more maintainable.


The problem with CF was not making up tags. The problem with CF was that the library was a freakish spaghetti mess, the language made no sense, it didn't interop with anything open source, and it kept being repositioned for different uses as it failed, so it had no coherent niche.

It was pretty much a proprietary PHP before PHP was somewhat rehabilitated, except it somehow made less sense.


I agree. The tags argument usually comes from those who never or barely wrote CF. Those who did (long-time CF dev here) have more than our fair share of gripes with the language. (If a CF dev doesn't have issues with the language, it's because they've never learned anything else and are desperately trying to defend the fort.)


CF has some neat features, like its in-memory SQL engine. However, the list of features the language never adopted that everything else supports (first-class environment variable support, package management, modularity, etc.) is far greater than its innovative features.

Most modern front-end frameworks aren't just custom tag languages; their strength is in component lifecycles. ColdFusion never really had good support for events at the custom tag level.


I'm not sure that paradigm ever ceased. ASP.NET WebForms and XAML had custom namespaced tags; now ASP.NET Core has custom tag helpers.


WPF was React before React was a blob of an idea in someone's brain. Damn, I wonder how 40+ year old engineers feel about tech. "We've seen this shit when your dad was in his teens!"


We largely ignore it, until it actually becomes a project requirement, and then we learn hands-on.

If we are lucky, most of them die in the space of 5 years, and then you get to enjoy project contracts to port all those shiny apps to boring, sound, long-lasting technology stacks.


Contracts to port all that truly ancient proprietary stuff to truly ancient non-proprietary stacks are fine too.

http://www.tpfug.org/pdf/2019/2019-TPF_Modernization.pdf


my man


I like the abstraction that web components provide. An encapsulated unit of logic, view, and namespaced style is great. I can build things like a button with a loading icon.

But I can use Vue's components, which don't care about the standard and just compile everything down to plain JS, so they work well with any browser.

On the other hand, PolymerJS tries too hard to utilize the web component standard. I didn't originally think that this would be a bad thing.

Svelte does look good. I checked it out a year back and thought that Vue should've moved more stuff into its compiler if it could.


My biggest issue with WCs is that they only solve a small part of the problem. You get custom tags, but without data binding and some form of reactivity these are not very helpful.

Lit, Skate, Stencil, etc. solve those limitations, but if you are already using a complicated toolchain and third-party code for the heavy lifting, I don't see much value in WCs.


It’s a bit like having a stdlib type specifically for Sets or URIs or Rational numbers in a programming language.

It’s not that any given library couldn’t make its own. It’s that having a single implementation “canonized” in the stdlib means that every library that wants this functionality will be using the stdlib’s implementation, and stdlib functions will also be able to consume or produce that type when it makes sense, and so you’ll have an ecosystem that all interoperates by passing around and transforming this shared type.

Also, standardizing Web Components means changing the semantics of DOM parsing for user agents other than browsers. My random Python HTML crawler lib isn’t going to parse some third-party library’s implementation of Web Components; but it is going to parse standardized HTML Web Components.


The problem is that Web Components give you an implementation standard, but not a standard for what components are named or how they work.

Your random Python HTML crawler is welcome to try and parse standardized HTML Web Components, but since every single one is going to be named differently and have separate controls and interfaces for accessing data, your crawler isn't going to get very far.

I am moderately concerned that Web Components are a step backwards for the semantic web, not a step forwards. Particularly after seeing other comments on this post that the shadow DOM is only accessible through Javascript in some situations. To me, web components feel over-engineered. I'm not sure exactly what it is, but I can't help but feel like there's a simpler answer to this problem that would have worked better for everyone.

Maybe a better, Javascript free implementation of HTML imports? The last spec got rejected for very good reason, but it's not like the core idea was necessarily bad. Then maybe throw out Shadow DOM entirely and replace it with some kind of more flexible/accessible scope system instead?


Yes, that is nice, but performance and bloat are much more pressing issues IMO.


You don't need a complicated toolchain. This 200-line lib lets you create and use Web Components using JSX syntax, just like React.js: https://github.com/wisercoder/uibuilder


And that library requires TypeScript and you probably will need to minify afterwards.

Also:

> Unlike React.js UIBuilder does not do incremental screen updates.

Which means you now have to start worrying about which parts you have to update. So again, you probably need something like MobX.


Good stuff. I would recommend VSCode over Visual Studio.


Can someone succinctly tell us what problem Web Components actually solve?

Is it about avoiding namespacing your CSS? Is it about shipping reusable components for many sites, hosted on some other domains? Loading JS on demand? What?

It just seems overly complex and limited with its slots etc.


The big advantage from my perspective is component reusability independent of framework (React, JSP, Angular, none, etc.). A couple of scenarios come to mind: you have many teams working on web app(s) that need to have the same styling, CX, and controls, and you can distribute components this way without tying them to a framework. Another example is complex controls that can be downloaded to speed up development.


There is also robustness. Today you can't take a React component from one application and drop it in another React application and expect it to just work. You may need to also copy CSS classes, make sure the class names are unique, remove any conflicting ids, global variables and so on. Thanks to Shadow DOM, web components are much more robust. You can just drop a web component in an existing application and expect it to just work, regardless of what framework (Angular, React etc) it is using.
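
A minimal sketch of that isolation (hypothetical tag name):

    class FancyButton extends HTMLElement {
        constructor() {
            super();
            // Styles inside the shadow root don't leak out, and the host page's
            // own `button { ... }` rules can't reach in:
            this.attachShadow({mode: 'open'}).innerHTML = `
                <style>button { background: rebeccapurple; color: white; }</style>
                <button><slot></slot></button>`;
        }
    }
    customElements.define('fancy-button', FancyButton);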


How is it better than using, say, iframes and postMessage? Is shadow DOM more efficient? Can Web Components work across domains?

I want to have Shadow DOM that encapsulates trust, so the enclosing parent javascript cannot access its contents!!

https://www.w3.org/Bugs/Public/show_bug.cgi?id=20144


Except you can; just use styled-components. If you don’t have any other deps except styling, it works quite nicely.


I have a bit of a contrary view on a few of the items. Not that I am defending web components; I have my own issues with them. My point is to offer a different perspective on a few of these items:

1. Progressive enhancement - but I think that websites should work without JavaScript wherever possible

At a certain point, backwards compatibility becomes more of a hindrance than a help, in an effort to support an ever-shrinking demographic. Now, I will grant the author that they used the word "website", where my frame of reference is building modern web applications; falling back to no-JavaScript to me would be going back to server rendering and the dumpster fire of page-post AS/JS/PH/P's.

2. CSS in, err... JS

Separation of technology is not separation of concerns. This has been argued over JSX, and it has been shown to be a huge productivity improvement. If everything for a component is internal to that component and does not need to be reused other than by using that component, then there is no reason to separate the technology, as both concerns are the UI component. In that regard, where everything is encapsulated in the component (HTML, CSS, JS), it is actually a better pattern not to separate technologies, as it makes the component easier to reason about.


I feel like web components are more true to the original intent of the web than these SPA frameworks in javascript.

You define your page with an XML document of UI components and javascript exists for animation and data fetching.

I could be wrong but that's just what it seems like to me.


Nope.

You define your page with custom html tags that won't even render properly without Javascript. And then you need Javascript for literally everything: initialising the component, data fetching, inserting data into the DOM, updating data in the DOM etc.


When I used SPA's it was basically web components inside of javascript, so it seemed like a layer of unnecessary indirection?

I'm SURE web components allow a non-js fallback, or that should be introduced into the spec.

See here for progressive enhancement of web components https://googlechromelabs.github.io/howto-components/howto-ta...

Don't SPA's require defining multiple components? A non-js component inside of the html page and a JSX component or whatever inside of the SPA.


> When I used SPA's it was basically web components inside of javascript, so it seemed like a layer of unnecessary indirection?

In the vast majority of cases it's nowhere near "basically web components".

> I'm SURE web components allow a non-js fallback, or that should be introduced into the spec.

They don't. All they can do is render static content inside if they have any.

> See here for progressive enhancement of web components

I fail to see where you see progressive enhancement. It's literally dozens of lines of Javascript code. If you disable JS on that page, you'll get just the static content with no functionality.

> Don't SPA's require defining multiple components? A non-js component inside of the html page and a JSX component or whatever inside of the SPA

All you do is attach the root node of an SPA to any element in the DOM. A common approach would be:

    <body>
      <div id="app"></div>
    </body>

    // elsewhere in JS code
    X.render(document.getElementById('app'), rootNode)
where X is your framework of choice.
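
With React, for instance, that last line would be (note that React takes the element first and the container second):

    import ReactDOM from 'react-dom';

    ReactDOM.render(rootNode, document.getElementById('app'))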


>In the vast majority of cases it's nowhere near "basically web components".

What else does it do besides render JSX components and manage their state just like a web component except with more indirection?

> All you do is attach the root node of an SPA to any element in the DOM.

You missed the point of what I was saying

The root node is a single node... where do you define the rest of your components? IN JAVASCRIPT. If you want progressive enhancement you have to define a SECOND SET OF NODES IN HTML. THIS IS REDUNDANT.

>I fail to see where you see progressive enhancement.

Why don't you actually read the thing? "If JavaScript is disabled, all panels are shown interleaved with the respective tabs. The tabs now function as headings."


> What else does it do besides render JSX components and manage their state just like a web component except with more indirection?

1. There are many more ways to implement an SPA than JSX.

2. WebComponents have exactly zero state management capabilities, and you need to implement your own ways to deal with state

So, SPA frameworks provide the following:

- state management

- optimised/batched updates (they can flush to the DOM efficiently thus avoiding jank, unnecessary re-paints and reflows)

- async rendering

- easy fallback states for async data fetching

- data fetching

- data propagation (for example, from parent to a deeply nested child)

- data binding

- render targets other than HTML (for example, canvas, WebGL, native mobile apps)

I'm definitely missing more. None of the above are provided by web components, and you will need to bring your own frameworks/libs to cover all those points.


Some of those aren't SPA specific. They're just javascript. Some are done by external libraries like redux. Some are benefits, however, two-way data flow is a janky model in itself that could possibly be done away with by a nice data structure imo.

Also, this SPA cargo cult has been around a lot longer than Web Components, which have only in the past year or two become cross-browser compatible.

I expect great things from them.

Web Components and their associated libraries can and will do all of those as they grow.


> Some of those aren't SPA specific. They're just javascript.

What do you think SPAs are? They are just Javascript. You add all that Javascript on top of web components and you ... end up with an SPA

> Some are benefits, however, two-way data flow is a janky model in itself that could possibly be done away with by a nice data structure imo.

Quite a few frameworks don't have two-way data binding. React famously doesn't.

> I expect great things from them. Web Components and their associated libraries can and will do all of those as they grow.

So, they can't do those things and you're asking "what are SPA libs good for"?

You clearly have little to no experience with web development.


> They don't. All they can do is render static content inside if they have any.

What would a non-JS fallback look like that isn't this? Where would the functionality come from if not JS?


Not just functionality. Even the styles are broken if you disable JS.

That's not "graceful degradation". It's full loss of functionality. Whereas graceful degradation is:

    Graceful degradation is the ability of a computer, machine, electronic system or network
    to maintain limited functionality even when a large portion of it has been destroyed 
    or rendered inoperative.
However, I was primarily responding to "javascript exists for animation and data fetching". As you clearly see, it exists for way more than just that.


> Even the styles are broken if you disable JS.

This is outright false

"If JavaScript does not run, the element will not match :defined. In that case this style adds spacing between tabs and previous panel."

https://googlechromelabs.github.io/howto-components/howto-ta...

The styles still run....


The styles defined _in_ the web component do not though (none of the `flex` styles are there). If the component was any more complicated than a pair of `<slot />` elements then the DOM structure would also completely change (and any ::part and ::theme targets would be gone too).

When JS is disabled you are left with a bunch of arbitrary tags in a structure that doesn't match the structure they have when JS is enabled.

That said, I don't think there's a good answer to this problem. Components-as-macros would run without JS, but then you have the LISP problem with composing macros (maybe this is a solved problem, but I haven't read Common Lisp's spec or used Racket in anger).


I don't think you're accurate on this. They're a browser spec, not a drop-in library. I don't know why the makers of the browser engine would ship something like that.


Because the browser engine didn’t ship the component, I did (or you, or that guy Harry who doesn’t know what he’s talking about, take your pick). The component only provides its styles when the JS is run (you can look at the definition). There is no way to make a component that doesn’t require JS (HTML imports were one attempt, but they didn’t work out).


It seems odd to rail against custom HTML tags requiring code to drive them. Unless you want custom tags to basically just be text-replacement macros for built-in elements you're gonna need some code somewhere to actually make them work.

In a parallel universe where web components came first the built-in tags would themselves be web components backed by JS instead of having native implementations.


I was primarily responding to "javascript exists for animation and data fetching". Javascript is the core of WebComponents for better or for worse.


Not accurate at all.

Native Web Components are a core spec in the BROWSER.

A web component can be completely styled and created without Javascript.


And what would be the purpose of such a web component? ;)


Progressive Enhancement...


> You define your page with custom html tags that won't even render properly without Javascript.

HTML5 encourages custom tags, and they can be styled with plain CSS?


I'd never considered #9 (the global namespace) until yesterday, but it seems like a showstopper for dynamic web components, whereas a system like React has components simply being JavaScript objects that can be imported and whose names are largely irrelevant except at JSX (or HTM [1]) parsing time.

[1] https://www.pika.dev/packages/htm


It's being worked on: https://github.com/w3c/webcomponents/issues/716

The fundamental difference here is that HTML needs to know what class to instantiate when it sees a tag. Supplying that unlocks a lot of power, like custom elements in the main document, Markdown support for free, easy integration with existing CMSes.

We can and will solve the single namespace problem though.


Thanks for the link.

By the way I just noticed you're the maintainer of lit-html. How does it compare to HTM[1]? They both look like they accomplish the same goal but HTM's implementation looks smaller.

[1] https://github.com/developit/htm


htm is a nice bridge because it works with React and Preact, allows for development in standard JavaScript, and is a runtime perf boost, because strings are a lot faster to parse than nested function calls.

But htm is just a shell over VDOM (that's why it's small, you still need the real VDOM implementation).

React + htm still suffers from its use of VDOM and the diff that necessitates. lit-html doesn't do a diff because it remembers where the dynamic expressions are located in the DOM, so it can directly update them.
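
A minimal sketch of that with standalone lit-html:

    import {html, render} from 'lit-html';

    const view = (name) => html`<p>Hello ${name}!</p>`;

    render(view('World'), document.body); // first render: create DOM, note where ${name} lives
    render(view('HN'), document.body);    // update: write straight to that text node, no diffing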


Oh so lit-html has the incremental DOM update feature built-in? It's a full-fledged replacement for the VDOM? I thought it was just a syntax parser like HTM. (Hence my question.)

[Edit] Oooh, I see where I got mixed up: in your README, the only part where I can find mention of this feature is in the very last part of this line:

> "lit-html templates are plain JavaScript and combine the familiarity of writing HTML with the power of JavaScript. lit-html takes care of efficiently rendering templates to DOM, including efficiently updating the DOM with new values."

Specifically the "including efficiently updating the DOM with new values" part, that's the only mention on this page, and it's literally right after where I started to skim due to (incorrectly) assuming the rest of the sentence was just a continuation of the first part.

That feature should probably be mentioned more prominently.

Also, if it has the ability to re-render based on state updates (e.g. from onclick callbacks), an example would be really important to know how to make use of that feature.


Everything is a give and take. At large organizations I've been at, web component frameworks - especially strictly typed ones like Angular - have tremendously helped to standardize web development across a large team of varying skill levels. When there's only one standard way to make a GET request and a defined interface that the result has to meet, output is much more stable.

On the other hand, these frameworks can be completely unnecessary overhead for one or two person teams. I'm currently using Preact for personal work, just because of how simple and lightweight it is.

To the author's point on sites that work in older browsers or without JS: unless you're building a product for government use or for an org that has really archaic IT requirements, the chances of needing polyfills that aren't provided by default in the framework are really low. The chances of someone needing your site to work without JS are also really low, especially if you're providing any sort of complex functionality.


I think you're reading "web component" as a generic term, but the author is using it to specifically refer to Web Components as a specific browser technology: https://developer.mozilla.org/en-US/docs/Web/Web_Components

You can sort of imagine it as another generic "web component" framework, but blessed into full, native browser support, and getting some features/technologies that required browsers to implement new features to support them.


"site's that work in older browsers or without JS"

I'm curious about this. A graph of say, what percentage of the top 500 websites can't run without JS, over time, would be interesting.

My assumption is that it's a non-trivial percentage, and growing.


As a past user of NoScript and current user of uMatrix, I can tell you a lot of sites do not degrade gracefully without javascript...


"The top 500 websites" aren't relevant. Their needs and tradeoffs are different, and of course they tend to be negative examples.

Does your site work without JS? If not, what justifies the more complex development and the degraded user experience for you and your audience?


This has nothing to do with my site. I'm curious what the trend is amongst the most visited sites. You sure read a lot between the lines that isn't there.


I think you're right, but that number should be correlated with number of users running browsers with JS turned off, which I suspect is trivial and shrinking.

The biggest issue I've seen with virtual-DOM sites is SEO. Google's crawler started using JS a number of years ago, but there are others that don't.


> older browsers or without JS

Sites that just convey text should work without JS. This would solve most accessibility (a11y) issues, if sites opted for simplicity wherever possible instead of adding complexity to solve issues caused by complexity.


If the www were invented in this decade, we would have a dozen mobile- and touch-friendly events and HTML element types.

And if we had those element types mobile web would be easier to use.

So Chrome and Firefox, please invest in new HTML elements for the mobile touch web.


Looking forward to the day when Django templates can observe realtime changes in user input (aka the equivalent of R Shiny).


(This might sound snarky so I apologize in advance for that.)

Just use Elm-lang.

In all seriousness, what is the business-value argument against using Elm?


Hiring/staffing/ramp-up expenses for a team are the majority in comparison to most other expenses the typical SMB will incur. Choosing a widely used technology (or a specific one that is very common in a particular industry) will normally provide greater long-term benefits and potentially reduce risk. "Nobody ever got fired for buying IBM" was a truism 50+ years ago - and still is today in many industries.

Edit: grammar, speeeling


I'm not sure I understand exactly what you're saying, but it sounds like a kind of argument I used to encounter back in the day when trying to get people to pick Python over Java, to wit: there are so many more Java programmers than Python programmers, it might be hard to find people.

My response was, why would you hire a Java programmer who is unwilling or unable to learn Python?

In this case, I would hire a normal person who was good at Sudoku and teach them Elm before I would hire, say, a React specialist who refused to pick up Elm.


Terrible backwards compatibility.

No escape hatches: The only one who is allowed to do bindings to Javascript is the maintainer.

No typeclass mechanism or similar.


Thanks for replying. :-)

> Terrible backwards compatibility.

At version 0.19...

> No escape hatches: The only one who is allowed to do bindings to Javascript is the maintainer.

I trust Evan Czaplicki. Did you read his thesis[1]? M'boy can think.

> No typeclass mechanism or similar.

Okay, but then my question becomes what business value are you unable to realize without a typeclass mechanism?

In other words, what is the vital use-case or feature that is so much easier to implement with a typeclass (relative to some other, less elegant solution) and that is so valuable that it counter-balances or outweighs all the benefits of using Elm?

(Same question applies to JS escape hatches. And, in theory, if the payoff was high enough you could fork Elm and patch it to do what you needed.)

[1] "Elm: Concurrent FRP for Functional GUIs" https://www.seas.harvard.edu/sites/default/files/files/archi...


[flagged]


Why not reconsider and stop saying it.


This article was written by the developer who wrote Rollup and Svelte

