The DOM is such a crufty old API/data structure to work with, and combined with CSS it's computationally a pig. Anything we can do to isolate bits and bobs on the page from one another, and make the DOM easier for the browser to render and emit events for, the better.
I just think the Shadow DOM and Web Components are too complex to work with as designed. The componenty composability of React definitely seems real to me, whereas I don't have such optimism about Web Components.
All I would want/need is a lightweight version of iframe that functions as a JavaScript security and CSS style sandbox; call it a subwindow. A subwindow could load HTML/CSS/JS independently from the parent page like a Web Worker does, or it could borrow assets the parent page has already loaded, and its HTML could be inlined as innerHTML statically/dynamically from the parent page (at creation time).
A subwindow would have an independent global/window object from the parent window. And, you could postMessage/onMessage communicate with the parent window if you needed. It could be constructed with its own dedicated DOM thread, or it could schedule on the parent's thread if you didn't want another thread spawned. Paints would have to be synchronized between the DOM threads, which could be a bummer.
I just want an inline iframe, which I know is like saying I want an "inline inline frame". ;)
You don't explain how you think Shadow DOM is too complex.
They're simple to me: basically just an isolated tree of DOM attached to the document. The only slight complexity comes from projection, which really is just a way to plug a child element into the shadow. Projection is absolutely necessary for composable widgets; you couldn't create containers without it.
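For anyone who hasn't played with it, here's roughly what projection looks like with the v1 API (tag names and content are made up; <slot> is the v1 spelling of the old <content> insertion point):

    // A minimal sketch of projection, assuming the v1 shadow DOM API.
    // The host keeps its "light DOM" children; the <slot> inside the shadow
    // tree decides where those children get rendered.
    const host = document.createElement('div');
    host.innerHTML = '<span>light DOM child, plugged into the shadow</span>';

    const shadow = host.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>p { border: 1px solid gray; }</style>
      <p>Shadow content around the projected child: <slot></slot></p>
    `;

    document.body.appendChild(host);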
One issue I have with this is that I won’t be able to apply a userstyle to restyle a page anymore, and I won’t be able to use websites without JS anymore.
Just earlier today I had an issue where I was trying to read this article [1] and the page, after loading for around 3 minutes (I was on mobile and on the free plan of my ISP, which gives only 64kbps, but unlimited), had all the content, but neither JS nor CSS.
There are already many websites which I can’t use at all anymore because they make everything invisible until JS and CSS are loaded. Flash of unstyled content is not an issue for me, it’s the whole reason I am able to read pages.
> One issue I have with this is that I won’t be able to apply a userstyle to restyle a page anymore
With the caveat that I only sorta understand this stuff -- if it emerges this way after the politics & process, you should be able to use the 'deep combinator' >>> to apply styles even across shadow DOM boundaries.
Solving the problem of composability of visual components on top of the existing DOM leads to chicken-or-egg problems like getComputedStyle/offsetTop mentioned in that Mozilla blog link someone else referenced. DOM is a crufty stateful beast, and enhancing it is hard which scares me away.
Beyond that, the Web Components spec seems geared for coarse-grained components - like a widget or a panel. I could be wrong about this, I admit; my experiments with Web Components a year ago were short-lived once I saw how much got polyfilled on Firefox. Anyhow, React is a javascript library exercise rather than a retool-the-DOM exercise to me, and that's a big reason I appreciate it. It offers data binding, rendering, templating, and true componentization all in one package - and it supports legacy systems. I think there's an answer in Web Components for all of those things, including legacy support up to a point with polyfills, but it was neither cohesive nor easy to roll things with.
I think with lots of tool support, perhaps a compiled language targeting the Web Components spec, it could be just as easy to roll Web Components as React Components.
(Speaking as someone who worked on the Polymer team for a year, and has been writing React at a separate job for roughly half that)
> The componenty composability of React definitely seems real to me, whereas I don't have such optimism about Web Components.
Web components have pretty much the same level of composability as React, but hold promise for even more. The main reason I say this is that the component specs attempt to formalize a surface area for all web component libraries to interoperate with each other - though we're not to that point yet.
Many of the Polymer core components are a great example of composed elements (sometimes overzealously so). Shadow DOM and insertion points (<content> or <slot>) do wonders for this. (Yes, Polymer 1.0+ backs off from this a little bit)
React really only composes well with elements written in React. It can talk out to native components (and custom elements), but it's awkward enough that you tend to build wrappers to expose a more React-y interface (value, onChange, etc).
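To make that concrete, here's the shape of wrapper I mean, assuming a hypothetical <x-slider> element that exposes a value property and fires change events (React 0.14-era APIs):

    // Sketch of a React wrapper around a hypothetical <x-slider> custom element,
    // exposing a React-y value/onChange interface instead of its native
    // property/event surface.
    var XSlider = React.createClass({
      componentDidMount: function () {
        this.refs.el.value = this.props.value;
        this.refs.el.addEventListener('change', this.handleChange);
      },
      componentDidUpdate: function () {
        this.refs.el.value = this.props.value; // push prop changes into the element
      },
      componentWillUnmount: function () {
        this.refs.el.removeEventListener('change', this.handleChange);
      },
      handleChange: function (e) {
        this.props.onChange(e.target.value);
      },
      render: function () {
        // React renders the bare tag; the imperative wiring lives in the hooks above.
        return React.createElement('x-slider', { ref: 'el' });
      }
    });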
---
> I just want an inline iframe, which I know is like saying I want an "inline inline frame". ;)
Assuming that you want these inline frames on a per component basis, we're talking about a ton of overhead per component:
* A new JS VM context (for the sandboxed global and security mechanisms)
* A new render layer (and the synchronized painting you mention)
* All the other browser bits that would need to be built up
---
DOM is a problem, but it's not the biggest problem, and performance-wise, there have been tremendous improvements in the last few years (Try running your benchmarks again).
However, imagine a world where all elements are defined with a set of public APIs like web components. No more crazy special cased properties, magic methods, etc. Imagine how much cruft the browser vendors could remove from the DOM. Imagine how much faster things would be.
Imagine all of HTML implemented purely in JS, and the browser only exposes the primitives needed to do that.
A very interesting perspective -- that I happen to agree with. There's also nothing preventing you from using web components from React.
> Imagine all of HTML implemented purely in JS, and the browser only exposes the primitives needed to do that.
It would be interesting to see if that's even remotely feasible with the current "limitations" of Shadow CSS and/or how Shadow CSS would have to be extended to support this.
TL;DR: Shady DOM provides an API that is similar to shadow DOM, but without all the crazy overhead when polyfilling it. The downside is that it's not quite transparent to someone outside of a Polymer element
I don't want to partition the page the way frames/frameset did, I want to sandbox code and content the way iframe does, just in a more lightweight fashion. If I could couple that sandboxing with a component factory library like React, that could really be something else.
Note that this is not supported in any browser, and Chrome support was first implemented and then removed over two years ago, so it seems unlikely to ever be supported. Also a seamless iframe is not CSS-isolated, something that the GP said they wanted.
I have sometimes thought about experimenting with rolling Blob URLs for HTML, CSS, and JS and using those to initialize an iframe. I think that would work. One of the things I'd like to do is be able to build a frame sandbox using assets already loaded by the parent page.
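Something like this rough sketch is what I have in mind (the content and messages are made up, and I haven't verified all the origin/sandboxing implications):

    // Rough sketch: point an iframe at a Blob URL assembled from strings
    // already held by the parent page, then talk to it via postMessage.
    const html = `
      <!doctype html>
      <style>body { font-family: sans-serif; }</style>
      <body>
        <h1>Sandboxed content</h1>
        <script>
          // Talk back to the parent without sharing its window object.
          parent.postMessage('frame ready', '*');
        <\/script>
      </body>`;

    const url = URL.createObjectURL(new Blob([html], { type: 'text/html' }));

    const frame = document.createElement('iframe');
    frame.src = url;
    document.body.appendChild(frame);

    window.addEventListener('message', (e) => console.log('from frame:', e.data));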
I believe iframes can be very costly for the browser to manage, but I don't understand all the reasons why that would be - perhaps something to deal with the cross-domain security concerns.
It'd just be nice to have a stripped-down same-domain version of iframe that could layout nicely as a rectangle into the parent page, but not inherit styles nor Javascript namespace from the parent page.
The idea of having multiple threads contributing to the DOM that builds the page really intrigues me.
There's a proposed thing for HTML5+ called CanvasProxy, for example. The idea would be that you could detach a Canvas from the DOM thread and give it to a Web Worker. It could get a Canvas context in the worker from the proxy, and all draw commands would execute against the Canvas' pixel buffer in the worker. Meanwhile, back in the DOM thread, the actual canvas tag still lives on, just in detached mode, but you can still set CSS styles on that tag. You just can't get access to the Canvas context.
It sounds rad, and would make a big difference for canvases that represent heavy computations or frequent draws. No browser vendor has attempted to implement CanvasProxy, however; it sounds like it's a challenge to keep it actually performant, and/or there are GPU threading issues?
At first I was gung ho about web components. I was so excited I used Mozilla's X-Tag for one project and Polymer in another project. I am now no longer in favor of using them and it makes me sad, actually.
Web components are simply too heavy. They add latency to download (yeah, you can combine them, but I've never found tooling (including Vulcanize) that worked very well). Also, the shadow DOM's separation makes styling across multiple components more difficult, requiring awkward shadow DOM hacks in the CSS and/or duplicating or adding more CSS to the components themselves. The same goes for JavaScript and third-party libraries that don't expect the shadow DOM, though that isn't so bad.
When I'm working on something that needs to be optimized for low bandwidth / high latency I go as lightweight as possible and I concat as much stuff together as I can. I want web components to work so badly but they're way too slow for me to use anytime soon.
"Too heavy" seems like it should be qualified—maybe you mean they are "too heavy to use for widget-level components in a web app with the current implementations"?
That's a weaker claim, and leads to the question of what they might be good for right now. I imagine for example Stripe's payment widget could be a pretty nice use case? In other words, higher-level widgets that are substantially encapsulated already, and are already often loaded as remote resources.
But I've only looked a little at the W3 specs and haven't tried using them in reality.
> "Too heavy" seems like it should be qualified—maybe you mean they are "too heavy to use for widget-level components in a web app with the current implementations"?
Current implementations, yes, but when browsers have 100% native support? I want to say yes but would love to say no. My biggest issues (latency, trying to concat and minify things down, and duplicative CSS) aren't really addressed anywhere, but I'll admit I can't say for sure until all browsers have the full implementation.
> I imagine for example Stripe's payment widget could be a pretty nice use case? In other words, higher-level widgets that are substantially encapsulated already, and are already often loaded as remote resources.
That seems like a great use case. I'd prefer (and probably most others would) to host my own components versus referencing them externally, but in some cases you can't really get around it (I'm not familiar with Stripe's payment widget, but Google Maps is certainly like this), and I could see that being a good use case. Honestly, that in and of itself may justify its existence, but I'm just not convinced of its usefulness outside of that just yet.
> What's wrong with vulcanize (combines all your components into one single html file)?
Most of the issues I ran into were relating to inconsistent pathing. It was difficult to get every type of thing that could use a path (image tags, css, html inclusions, etc) working in a way that worked at both the path of the component and the newly created path when combined. Perhaps this has changed since the last I used Vulcanize but it was very painful at the time.
I also had a hell of a time getting the google maps, and a few other google components, to Vulcanize with the rest of my components but I honestly can't remember the issues now.
> Polymer now offers Shared styles, so that you don't have to duplicate your css
That's Polymer specific though; is there a way to do that with native web components? I could be missing something but it looks specific to Polymer. Regardless I'm glad this is being addressed to a degree but the solution seemed somewhat unintuitive as it's not using CSS semantics for driving the style.
Essentially the majority of projects I work on have developers and designers on them. The designers are usually pretty good with CSS, Illustrator and Photoshop so anything outside of CSS requires quite a bit of learning for them so that's a hill I have to battle with techniques such as these.
> Get ready for HTTP2 in a few years.
Just like I'm not going to be able to use ES6 for many years I'm not holding my breath here :)
Web Components are defined in script; they add no more download cost than any other scripts or styles.
Shadow DOM's style scoping is very intentional, and brings sanity to styling. Cross-scope styling is possible via CSS custom properties in a much more principled way than letting styles leak all over the entire document.
ShadowDOM removes the need for a component user and a component author to collaborate in order for styles to be encapsulated. This, in turn, enables easier distribution and usage of generalized components.
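A small sketch of that kind of deliberate leak, assuming the v1 custom elements/shadow DOM APIs (the element and property names are made up): the component reads one custom property, the page sets it from outside, and nothing else crosses the boundary.

    // Sketch: the component's shadow styles read a custom property, and the
    // page (or a theme) sets it from the outside. Only this one knob crosses
    // the boundary; everything else stays encapsulated.
    class FancyButton extends HTMLElement {
      connectedCallback() {
        this.attachShadow({ mode: 'open' }).innerHTML = `
          <style>
            button {
              /* falls back to gray if the page doesn't set --fancy-accent */
              background: var(--fancy-accent, #888);
            }
          </style>
          <button><slot></slot></button>
        `;
      }
    }
    customElements.define('fancy-button', FancyButton);

    // Page-level theming, outside the shadow root:
    document.documentElement.style.setProperty('--fancy-accent', 'rebeccapurple');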
Exactly, how many times do you want to use a component but find it near impossible because it's written with a framework, or uses CSS styles that conflict with your own, or is written for use with a different version of the very framework you're using... this is why we keep re-inventing the same dumb web components over and over again.
I don't think that's a fair comparison. Why would you need so many selectors in the 'regular' case and what are they doing? I've worked on some pretty complex apps and have never even come close to 200 selectors in total; probably not even 100 but I'm not 100% sure.
Shadow DOM allows for specific targeting within a component; is that necessarily faster than a query using another element/class name for targeting a component in "regular" usage? I'm curious about the numbers here, as I don't know that you're necessarily right or wrong.
I had the exact same thoughts when I first read about web components, or rather HTML imports in particular. The extra requests and latency introduced are not worth the benefits of having pluggable components. I can already imagine people writing server side code to include these "web components" inline into the static markup sent to browser, which somewhat defeats the purpose of having these components be dynamically interpreted by the browser.
Or maybe the correct way of using web components would be to create component "rollups" on server side, to reduce the latency effect of the extra requests. Or the browser might start caching common components between page requests.
Maybe the current web component spec is just the first step to get the reusable web started...
HTTP2. In the HTTP2 world, extra requests don't add any additional latency. The browser multiplexes them over a single connection. I think I'd heard that it even uses a common compression dictionary for all requests, which means that boilerplate code that's duplicated between components will get compressed away.
>In the HTTP2 world, extra requests don't add any additional latency.
They don't add any additional TCP connection latency, sure. I think it's a bit unrealistic to claim there won't be any congestion of the multiplexed stream which causes additional latency.
>I think I'd heard that it even uses a common compression dictionary for all requests, which means that boilerplate code that's duplicated between components will get compressed away.
I may be misremembering, but I thought that was just for request headers?
What extra requests? Are there any requests you get with Web Components that you don't with React, Angular, jQuery or plain JavaScript?
Web Components are just a set of APIs and capabilities in the browser. They don't imply anything about the number and type of files that are used to create your app.
I use webcomponents (specifically Polymer stuff), and all I do is just roll together all the components I'm going to need for an app in one big concat-and-minified file. You don't have to fetch a gazillion components, any more than you have to fetch a gazillion .js files - cat, minify, gzip works on webcomponents as well. Plus, as other people mentioned, http2 solves this problem as well.
http2 may help you with that. Your tooling could also potentially inline the components too, you don't have to make an http request for each one. That fact that Vulcanize didn't work too well for you for whatever reason seems like something that shouldn't be terribly difficult to fix.
> That fact that Vulcanize didn't work too well for you for whatever reason seems like something that shouldn't be terribly difficult to fix.
Maybe? I'm not sure. The vast majority of issues I had with Vulcanize stemmed from pathing changes.
So say you're working on a component that exists at /components/mycomponent/mycomponent.html
When you're referencing JavaScript, images, CSS, etc. you typically assume you're at /, but it doesn't work like that for everything. So you either have to set up a configuration so you have a universal prefix for everything, or do everything relative to root (which, at least in the environments I deployed into, wasn't really doable since the root could change). So the vulcanizer needs to understand and reroute everything within your component, and since there is no way to override the way the browser fetches content, you're left with trying to figure out everything to include and doing it via ajax (if you wanted to be automated about it).
I'm not saying it's impossible but it seems difficult to do to the point where I don't know that that's the best path for web development to go down.
I honestly can't remember some of the other issues I've run into. Mostly JavaScript errors from third party components and even other Polymer components but it's been about 6 months or so since I've used it. I'm sure Vulcanize has gotten better by now and I know Polymer has.
HTML Imports are going away. I'm not sure the state of Polymer but my guess is that they will be removed eventually, so I wouldn't depend on a tool like Vulcanize being improved; it's a dead feature.
Instead use a normal JavaScript module loader to load components and concat them the usual way. Don't use imports.
HTML imports allow you to bundle HTML, CSS, and Script and import them in one step. ES6 only handles the script part. We very much love HTML Imports on the Polymer team and they aren't going anywhere for us.
I expect that when the ES6 module loader spec is finalized it will be fully compatible with the underlying semantics of HTML Imports, and they will work quite nicely together.
There are several competing proposals for asset imports [1] any of which are viable. HTML Imports may persist, be supplanted by ES6 imports, or may otherwise survive in a different but similar form. Hell, I wouldn't be surprised if one day someone makes a web component to implement the current HTML Import spec using whatever the browser happens to support.
The other terrible side effect of web components is that it gives the standards group an excuse to never update HTML anymore. "Why do we need HTML6 when you can make anything you want with big, heavy Javascript-based web component that takes expert programmers to make?" Indeed.
Just say no to web components. Say yes to upgrading HTML specs directly.
That's the exact opposite of the reason why browser vendors are implementing web components. It's so that they can get direct, real-world feedback from actual users of a feature before baking it into standards and making it impossible to remove from the web. The intent has always been that the most frequently used webcomponents will end up getting baked into the browser, and then into the HTML spec.
We used to write web specs speculatively, without having feedback from real-world use, and it was a disaster. Take a look at the list of W3C standards:
The intent has always been that the most frequently used webcomponents will end up getting baked into the browser, and then into the HTML spec.
That's nice in theory, but what's happening in reality is that no component is going to be standardized, and HTML remains stagnant for another 15 years.
It's a little hard to make that argument when web components themselves aren't standardized yet, and are fully supported natively in only one of the major browsers.
There's pretty ample evidence that common webdeveloper behavior does make it into the spec, eg. JQuery => querySelectorAll, Javascript animations => CSS transitions & animations, long-polling => websockets, common layouts => <header>/<footer>/<main>/<nav> elements, dropdowns => <details>/<summary> elements, the addition of date/datetime/color/number/range/tel input types, etc. If you're still coding HTML like you did in 2000, you are now very obsolete.
Sorry, what? The browser has sorely needed extension and componentization capabilities for many years. Why should the W3C be the gate-keepers of what's allowed in HTML? That only leads to the invention of heavy-weight and balkanized UI frameworks. We can do much better than that.
> why the generic UI elements in Polymer, like Menu, Dialog Boxes, etc.., isn't part of HTML itself?
Because they shouldn't be. HTML should not be responsible for providing every possible UI element. In fact, I would argue it provides _far_ too many UI elements already, and should instead provide low-level user input APIs that can be used to build interactive elements.
And so web developers have to write basic UI widgets just to get basic UI elements. Or, use a heavyweight component library that needs to be downloaded.
Thanks but no thanks.
HTML should provide a rich, declarative UI API by default. We shouldn't have to hunt down separate components to get that.
If you have to load Javascript, you have already lost.
A well-regarded standard UI library, say one jointly developed by the browser vendors, could be CDN-served and cached, or even bundled with browsers, eliminating the download cost - with the massive advantage that it could be versioned independently of HTML and browsers, and would be optional in case a better library comes along.
> A well regarded standard UI library, say jointly developed by browser vendors,
This is commonly known as the web browser.
lol @ browser vendors deploying UI components via Javascript, when they could just as easily deploy them in the web browser itself as part of upgraded HTML specs.
I'm pretty excited about what this will do for web components development. Polyfills have come very close, but deeply nested components can become a tangled mess in no time.
If anyone is interested in getting started with web components, I recommend taking a look at Google's Polymer[1] library. It's a fairly opinionated approach, but has a relatively small learning curve.
I rejigged my personal website using Polymer and was thoroughly impressed. It's pretty heavyweight, mostly because I'm too cheap to precompile and minify my Javascript, but the modular approach means that I can now, finally, apply actual software engineering principles to building websites.
The documentation could be better, and you pretty much have to like Material Design, and there could be a richer set of standard widgets available, and it desperately needs a CDN, but other than that I found it really pleasant to use.
What I haven't tried is building any kind of reactive interface. It's got some support for binding elements to data, but I didn't explore that much. Anyone care to comment?
We (tonicdev.com) experimented with both Polymer and React before settling on React. Doing things "the right way" in Polymer/WebComponents was just a mess. It did in fact remind me of the "old way" of doing things in Cocoa: AKA, accounting for every edge case with every appendChild or DOM mutation that could take place on your component. Having tried React's "API-less" approach of just re-rendering what you want given the params/sub-components, it was an absolute time sink that just offered no benefits.
You'll see in the answer the complexity around dealing with children that come and go (vs. in React, where you'd actually write it declaratively once and be done with it -- kind of interesting that Polymer has so much "templaty" stuff and is ultimately less declarative than the pure JS approach of React).
Another classic approach I've seen is to create an "<x-app>" element which contains the entire site, which also handles the various application state and binding tasks. The Polymer Starter Kit[1] follows this approach, and it also includes tools for building/minification using Vulcanize[2] and gulp[3].
Separation of concerns and better variable scoping as well. In a web component inside a shadow DOM, you can do something like input id="mainInput", or some other generically named DOM id, without worrying that some other third-party library or component will create another node with the same id.
Similarly for CSS selectors and class names - I mean, BEM and SMACSS and such are cool and all, but it's also nice to know that my CSS styles won't leak outside the component, so I can just do div class="main" or something and not worry about how other developers name their classes on other branches of the DOM.
No, I've never touched React (mostly because it seems to require precompilation). I've been meaning to check it out at some point but the barrier of having to set up a Javascript toolchain has, so far, been too high to be worth it.
Before the Polymer rewrite the trivial amount of interactive stuff on my site was raw jquery.
FYI, React doesn't require any precompilation. You only need to include a JavaScript file. The JSX stuff is entirely optional, although almost everyone uses it. I share your fear and loathing of JavaScript compilation toolchains, and I successfully avoided all of them on a medium-sized React project. After we had a good working prototype, we started using some more tooling, but we kept it extremely simple to start with.
1. JSX tags simply compile to calls like React.createElement("a", { href: url }, ["content"]), and if you just create a short alias for that function, you don't need JSX at all (see the first sketch below this list).
2. If you're anything like me, things like Flux scare you by being weirdly ideological and requiring unnecessary boilerplate. You can ignore it and just use a single object to store all your state, according to the "keep it simple, stupid" philosophy. That said, I would recommend that you look at the Redux library, which is a simple and rational approach to the nebulous "Flux" concept. Redux is simple—it's just a certain design pattern that makes sense in the React context.
3. I find React.addons.update to be the most obvious and simple way of doing immutable updates to nested structures—which is what most React programs are doing all the time. People have come up with various libraries for more sophisticated immutable data structures, but you're unlikely to need them, and React.addons.update gives you 90% of the bang.
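To illustrate point 1, here's the same kind of component written without JSX; the alias name h is just a convention I like, not anything React requires:

    // The same component with and without JSX; 'h' is just a local alias.
    var h = React.createElement;

    // JSX version:     <a href={props.url} className="link">{props.children}</a>
    // Plain JS version:
    var Link = function (props) {
      return h('a', { href: props.url, className: 'link' }, props.children);
    };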
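And a small sketch of point 3, using React.addons.update to produce a new state object without mutating the old one (the state shape here is made up):

    // Produce a new state object; the original is left untouched and the
    // unchanged parts are shared structurally.
    var update = React.addons.update;

    var state = { todos: ['wash dishes'], filter: { showDone: false } };

    var next = update(state, {
      todos: { $push: ['write docs'] },
      filter: { showDone: { $set: true } }
    });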
Next time I have a bit of non-stressed free time, I'd like to make a repository with an example React application that's extremely simple, requiring no tooling and as much as possible avoiding "lockin" to various newfangled libraries and paradigms...
I shared your fear of toolchains, but with Webpack, Babel, and Hot Module Replacement, I've been won-over.
Those tools make it painless to sculpt out a UI in real-time, using the latest ES sugars and a sane module system. Webpack has Uglify built-in, so you can target a minified file for production.
With Babel and JSX, you're still writing idiomatic JS, so you don't have the same lock-in problems you might with something like CoffeeScript. In my mind, it's a clear value-add with no observed downside.
Yeah, those things are nice and compelling. I've used them, often happily! Some of the new language features are hard to live without once you're used to them; I especially like async/await and the destructuring syntax.
The flip side is simply that it's more technology, more stuff to learn, more stuff to keep running on your computers, and more potential bugs and strange interactions to run into.
I don't like to talk about it in a negative way, because it can sound like I'm disparaging the technology. But, there are nice things about having your project be written in the actual language that the browser already supports. Like simply never having to think about transpilers.
Of course once you've learned about tools like Webpack and Babel, you can use them to your heart's content if they make you more productive. I just wanted to emphasize to the person I replied to that you don't NEED any of them. Especially not to just get started.
You don't need minifying until there's a problem with your page load time and until you actually know that minifying will help with that -- until then, it's strictly speaking a premature optimization. (Writing less complex code may be more important!) And so on.
And "no observed downside" is not strictly true if you consider the cost of added complexity.
For someone who is overwhelmed by infrastructural proliferation, it should be nice to hear that you can develop actual applications with just plain JavaScript and maybe a small Makefile.
By "no observed downside", I meant once the toolchain is set up. Now, I get Hot Module Replacement and JS Harmony sugars for free, without having to worry about which browsers natively support which new features.
Your point about perceived complexity/intimidation is a good one, but for me, removing cross browser inconsistencies simplifies development more than Webpack complicates it.
There's always a cost to complexity, and it's usually in interop. What happens if you want to share a quick technology demo with a non-technical person over e-mail? What if you want your business cofounder to be able to tweak CSS by herself, and not have to learn what "setting up a toolchain" means? What if you're leading a team (or worse, a department or company) of people who aren't familiar with the latest in the JS world, need to do minor tweaks to the frontend, but can't justify learning a whole stack of technologies for something they spend 20% of their time in? All of these are real scenarios from my professional career.
I like Babel too and the bulk of my new development is in ES6, but ubiquity is a really nice feature for development tools, probably the nicest of all.
JSX isn't mandatory, so you can just write plain old JS if you want. And if you want to use JSX but don't want to use a full-blown toolchain for development, there is an in-browser JSX transform library[0] that you can include to do the compilation on the client side.
You know what, the barrier is fucking shit, that's true. I had issues myself and I'm not sure I am doing it right.
However, don't knock it just because of the precompilation stuff. Mostly because I'd like a nice comparison by someone, heh :)
I personally hate bindings now, apart from using them with forms, but React offers a mixin for that. What I am interested in is whether Polymer is worth it or not.
If you like a discrete control flow, then Polymer is a valid approach. TBH, if your application isn't beyond moderately complex, it doesn't matter too much which framework you choose as long as you have relatively clean separation of logic and control flows.
React (with a flux-like control flow library, such as Redux) has a higher cognitive load to start. Where it shines is that as additional features are added, there is very little additional complexity, whereas with Polymer (or Angular) your application will see a curve of additional complexity as features are added.
I started a pretty basic setup for React + Webpack with hot reloading[1]. I'm going to add in Redux next, then Router... At which point I'm planning on adding in material-ui[2]. From there, I'll try to keep it updated or forked to use as a base application.
I started it off by following the SurviveJS book[3], but my direction is a bit different. I'd also recommend reading the full-stack Redux tutorial[4].
I've been using Polymer for a few months now after following the WebComponent spec for quite a while, and I'm really liking what I'm seeing.
While it can admittedly lend itself to a very Java Swing-feeling development flow, that's a breath of fresh air compared to the traditional mess of web code I usually end up with.
WebComponents in general (and Polymer as a particular library on top of them) I feel are going to be a 10x force multiplier for web development.
Are they using it for Inbox, too? The memory use on that page. Yikes. To think I used to browse the web on 64MB of RAM without ever feeling memory-constrained. Sometimes with my e-mail client also open!
I just checked. My Inbox has no mail in it (in the part I have open anyway, which is the... inbox). It is using 1.01GB of memory, 858.4MB compressed.
Double-u. Tee. Eff.
Docs.google.com, with a spreadsheet open? 617/585MB. Still entirely unacceptable, but lower than an empty Inbox. The difference between the two should be more than enough to run both, even with a comically generous bloat allowance.
(I'd written and then deleted something snarky about how Google's notoriously rigorous hiring practices really shine when we look at the software that results, but seriously, all those smart people and this is what comes out? All their web software is embarrassingly bloated. Chrome's battery drain versus Safari? Android, which seems to take Do the Worst Thing that Could Possibly Work as its guiding principle? What is going on in there? Is it some sort of organizational problem?)
The products have gotten bloated, slow, unusable on older devices, and the quality of the actual service has gotten worse, too.
This is just... impossible to use.
Google already is the new Microsoft, and Chrome the new IE. With this, they’re also losing any other reputation they had left.
Google products once used to be about the tiniest and fastest solution, working everywhere, and using the least possible resources, without any bullshit features.
It might also be interesting to take a look at Aurelia (http://aurelia.io). That is a fairly new framework (still in active development, it hasn't hit 1.0 yet) which is heavily influenced by web components and seems to implement them quite well. When using their bundling gulp task I can have a full component-based app that only requires a couple of HTTP requests to load.
That depends on what you are doing inside of your web components. You can load any JavaScript you want in there, you know. If you want to use React, that's fine. Web Components is just a set of extensions to the DOM standards to allow encapsulated custom elements.
I was wondering about that when I read this post this morning.
The interface of a React component is very small: it's effectively just a render function with some lifecycle hooks and an internal setState method. Depending on your data management philosophy, you can even scrap state altogether and leave just the render function:
var View = (props) => <div>{ props.content } </div>;
You could probably write a Polymer-esque library that let you create Web Components with a similar API. I haven't looked into the implementation details of react-{dom, canvas, -native}, but I imagine that's effectively what they do - bridge from that API to the platform's native one. I expect that if Web Components are ever faster or otherwise better than the legacy DOM, such a shim will emerge to make WC as easy to write as React components.
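A very rough sketch of what such a shim could look like, assuming the v1 customElements API; all the names here are made up for illustration:

    // Very rough sketch of a React-flavored shim over custom elements: a
    // component is just a render function from "props" to an HTML string.
    function defineView(tag, render) {
      customElements.define(tag, class extends HTMLElement {
        static get observedAttributes() { return ['content']; }
        connectedCallback() { this._render(); }
        attributeChangedCallback() { this._render(); }
        _render() {
          const props = { content: this.getAttribute('content') || '' };
          if (!this.shadowRoot) this.attachShadow({ mode: 'open' });
          this.shadowRoot.innerHTML = render(props);
        }
      });
    }

    defineView('x-view', (props) => `<div>${props.content}</div>`);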
- How does it affect performance, if say, a large application is composed of many shadow dom elements, each containing a large amount of redundant CSS?
- Will there be a way to let certain css styles "leak" through while encapsulating others? Or is any type of native inheritance gone?
Performance will be better with Shadow DOM. The styles are deduplicated (in Chrome at least, and I believe Firefox too), and the style boundary adds opportunities for optimization.
There will be ways for styles to leak. The Polymer team has added a polyfill for CSS Custom Properties and CSS Mixins (the @apply rule [1]).
Custom Properties cross shadow boundaries, and @apply lets you define property sets that are applied later down the cascade. This lets components define very targeted sets of custom properties that they apply to specific elements within their shadow root.
Because a template would define the styles once, but be stamped multiple times, there should be minimal redundant copies of the styles themselves. How this performs will remain to be seen, as it would be up to the browsers to implement the spec optimally.
You can "pierce" the shadow DOM with the ">>>" combinator (previously /deep/), allowing you to style elements from outside of the shadow DOM. This is discouraged in the case of web components, however, as a component is supposed to operate as a black box entity. Instead, a component can expose APIs for styling, such as with CSS variables: https://www.polymer-project.org/1.0/docs/devguide/styling.ht...
Edit: This article[1] has details on performance and styling options.
I'm sure Chrome/Opera and Firefox will tweak their implementations to match the updated spec soon, then all that's left will be Edge, and they've expressed intent to implement!! https://dev.modern.ie/platform/status/shadowdom
I'm extremely excited about this. I have been playing with stock Web Components for the last year and a half or so. Polymer I have been hot and cold for. I prefer to learn the standard to start. Overall I am very happy with this development.
Agreed. While Polymer is a neat project, I honestly think it's kind of hurt the adoption of web components. I'm sure I'm not the only one who's tried to learn about web components according to the standards, only to run into a ton of Polymer stuff on the web and get turned off by the heavy, opinionated, framework-like nature of it. The underlying web components APIs are much less off-putting to me, once you can dredge them up through the SEO swamp.
Polymer is really not that opinionated. All they do is provide sugar that you would eventually write yourself after you get comfortable with web components and start to notice where the boilerplate lives.
It's a "oh, that's neat" feature visible only to developers, but it is not enabling anything groundbreaking for the end users, so it isn't making webapps more competitive with native. We still need iframes for 3rd party embeddable components, and for 1st party components we have good enough solutions, and the tendency seems to be abstracting the DOM away.
Web Components keep taking a lot of spec and implementation effort that could be spent on something with a bigger impact (ServiceWorkers in iOS? Transactional DOM that doesn't jank/layout-trash? Activities/Intents/Scopes for native-like sharing?)
A solid platform gets better gradually, it may not excite you but it's an important step in the right direction.
Indirectly it does impact the users, because polyfills for browsers that do not support shadow dom have performance issues, not counting the time developers spend maintaining them.
Microsoft has publicly announced that they're implementing Shadow Dom for Edge.
Chrome is implementing Shadow Dom v1 now, an experimental version is in Canary I believe. The currently shipping version has a different model for distributed children, but we're actively moving to the newly agreed upon spec now.
"Stop writing frameworks" by Joe Gregorio [OSCON 2015] is a nice intro to html imports, shadowdom and other aspects of web components, although his knock on frameworks goes too far for me, especially since some frameworks can use web components as well.
https://www.youtube.com/watch?v=GMWAHzXQnNM
I have always been skeptical about Shadow DOM. It tries to solve different problems under one solution.
Say you want a static page without scripts but with CSS encapsulation. You cannot achieve that. Instead you have to use Shadow DOM and get it packaged with other things you don't need.
CSS encapsulation could be solved with special attribute, new HTML tag or even with new CSS rule, and it would have been more flexible than Shadow DOM is.
I think the main reason webcomponents won out over scoped styles is because usually once you need to scope your styles, you find you also need to scope your querySelector calls, and your DOM traversals, and your IDs....and pretty soon you have shadow DOM. It's unlikely that you need style namespacing if you're just a single dev working on a static page, and it's unlikely that you won't need JS & DOM scoping if your product grows beyond that.
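That scoping falls out almost for free once you have a shadow root; a small sketch using the v1 API:

    // IDs and selector lookups inside the shadow root don't collide with
    // (or match) the outer document.
    const widget = document.createElement('div');
    widget.attachShadow({ mode: 'open' }).innerHTML =
      '<input id="mainInput" placeholder="inside the shadow">';
    document.body.appendChild(widget);

    document.getElementById('mainInput');           // null: outer lookups can't see it
    widget.shadowRoot.getElementById('mainInput');  // the input: lookups scoped to the component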
Compose all the things! It's definitely reassuring to see that this is the attitude of the people who are building the standards for the future of the web.
Really like this as a React dev. Recently there has been a trend of using inline CSS to do CSS modularization in React, but I feel it's not really a clean approach. With Shadow DOM you can mount the whole React app onto a shadow root and have perfect CSS encapsulation. Hopefully we can see Shadow DOM supported in the major vendors' stable channels soon.
Shadow DOM is not the solution for CSS isolation for components. You'd have to mount every single component into its own shadow DOM which seems unnecessarily complex (and most likely has significant perf implications).
No, I wouldn't mount each component into a shadow DOM. For example, if I have a React app with 2 major parts developed by 2 teams, I would just mount them onto 2 shadow roots so each team doesn't need to worry about CSS class collisions or whatever else is on the webpage.
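Something like this minimal sketch is what I mean (the host ids and app components are hypothetical, and React's document-level event delegation had some quirks with shadow DOM at the time):

    // Two React trees, each mounted inside its own shadow root so their CSS
    // can't collide with each other or with the rest of the page.
    function mountIsolated(hostId, AppComponent) {
      var host = document.getElementById(hostId);
      var shadow = host.attachShadow({ mode: 'open' });
      var mountPoint = document.createElement('div');
      shadow.appendChild(mountPoint);
      ReactDOM.render(React.createElement(AppComponent), mountPoint);
    }

    mountIsolated('team-a-root', TeamAApp);
    mountIsolated('team-b-root', TeamBApp);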
There are some key features of template that hiding it would not accomplish, namely that the contents like scripts are guaranteed not to execute until they've been cloned/stamped.
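A quick sketch of that guarantee (the contents are made up): nothing inside the template loads or runs until its content is cloned into the live document.

    // Template contents are inert: the image isn't fetched and the script
    // doesn't execute, even though the template sits in the live document.
    const tpl = document.createElement('template');
    tpl.innerHTML = `
      <img src="huge-photo.jpg">              <!-- not fetched yet -->
      <script>console.log('ran');<\/script>   <!-- not executed yet -->
    `;
    document.body.appendChild(tpl);  // still inert

    // Only when the content is cloned ("stamped") into the document do the
    // image load and the script run.
    document.body.appendChild(document.importNode(tpl.content, true));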
React's virtual DOM is an implementation detail - its real power is in how easy it makes it to write composable components. If Web Components are ever a faster render target than the DOM, you can trust that ReactDOM will be upgraded to comply (in the same way that Ember aped React's virtual DOM as Glimmer).
That said, I have a feeling Web Components add complexity (and weight) to the render path. I doubt they will be faster than DOM diffing in the near term (although with any new web tech, there's always the possibility that they'll make available performance optimizations that can't be supported in the traditional DOM).
I agree there will probably be a performance lag for a while, but long-term I think the WC model wins out because it brings encapsulation and reuse into the formal suite of W3C specs.
That's not a big deal for any one web app, but a huge deal across web development in general.
General note to people engaging in technological conversations:
When posting strong statements, please consider including explanations, arguments, and maybe some definitions. This helps other people understand your claims, and is a big part of what makes "reason" such a powerful thing.
They're fantastic for a lot of purposes. I've been experimenting with web components for a couple of months now. It's not really "prime-time" ready, IMHO, but it's cool to see how quickly it's progressing now that Mozilla is more on board [1] and Microsoft's Edge is implementing core features [2a] and supporting WCs generally [2b]. (PS: Vote on the MS Edge website for implementing WC features! It seems they actually look at the votes somewhat to determine features.)
One of the biggest hangups for my team has been Firefox/Mozilla opting not to enable HTML Imports (it's implemented), which causes quite a bit of annoyance when developing and using piecemeal web components. They list their reasons [3], but their perspective seems to be driven from a "JavaScript first" mentality with HTML being second class. My code is all driven by web components, though. The difficulty comes from getting the WC polyfills to load before Firefox tries to load your dom-modules (in Polymer terms). I want to separate out my HTML into modules and not require loading polyfills before I can include static HTML pages that might not even have JavaScript in them.
There will probably be a lot of resistance, or rather misuse, of web components for a while unless web developers start switching to a "data first" mentality, which is gaining traction from React and ClojureScript. It also feels pretty heavyweight if you're using Polymer currently.
BTW, anyone know how the shadow DOM affects performance of the entire browser window? Mainly, I'd be curious to know if updates to a shadow DOM are isolated from causing light DOM updates (e.g. updates to the shadow DOM could be isolated so they generally don't affect the parent window, or could be batched). This would yield many of the benefits of React's virtual DOM. I'm just not sure where to look in the WebKit docs to find where this would be documented.
Shadow DOM, in particular, provides a lightweight encapsulation for DOM trees by allowing a creation of a parallel tree on an element called a “shadow tree” that replaces the rendering of the element without modifying the underlying DOM tree.
It's similar in some ways, but not exactly the same. This allows an element on a page to contain a whole tree of elements while still appearing as a single element from the outside (i.e. for the purposes of DOM methods, CSS, etc.).
No, the virtual DOM and the Shadow DOM are distinct concepts.
The virtual DOM is a parallel representation of the DOM which is used to diff against—it's only used for "logic" rather than display.
The Shadow DOM is actually almost the opposite: it replaces the visual/interactive presentation of the tree without changing the exposure of the underlying elements to the rest of the page.
That being said, they both offer mechanisms for writing modularized frontend code effectively.
React does not use Shadow DOM. And React is not related to CSS first-hand, though people are trying to apply the same modularization principles of DOM encapsulation to the CSSOM.
Not only that - there are a lot of implications that come with shadow DOM that will make it impossible to just plug and play with React. Namely, the fact that it stops the cascading effect of Cascading Style Sheets. Wherever you are styling large swathes of your app with common CSS code, you'll need to rethink, either by treating CSS as a module and importing it at the new shadow root, or by rewriting your CSS to be more redundant.
These tradeoffs require considerable buy-in, which leads me to suspect React will continue to be agnostic towards shadow dom.