Introducing Custom Elements (webkit.org)
231 points by snake_case on Nov 19, 2016 | 111 comments



What's the point of introducing another syntactic feature to HTML (the markup language) when you need JavaScript for it anyway? When you're already using JavaScript, custom elements don't add any essential functionality.

Their inclusion as a feature into HTML feels like a not-so-well thought-out decision from a language design PoV.


Custom Elements (also Shadow DOM) is a low level API that helps explain how an author can define elements that have aesthetic and functional parity with built-in HTML elements.

Custom Elements provides a lot of useful features, including: compatibility with "native" DOM APIs (e.g., appendChild); lifecycle callbacks (e.g., notification of being attached or detached); consistent timing for content distribution; the ability to "lazily" upgrade in place.

These features are generally useful regardless of the type of framework or style you may wish to use to write a component. They form a common baseline that moves the web towards a place where components (simply HTML custom elements) authored by different people do not need to be aware of each other's finer implementation details in order to be interoperable.
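As a sketch of what those lifecycle callbacks look like with the v1 API (the tag name and behavior here are purely illustrative):

```html
<fancy-box>Hello</fancy-box>
<script>
  class FancyBox extends HTMLElement {
    // Lifecycle callback: fired when the element is attached to a document.
    connectedCallback() {
      this.style.display = 'inline-block';
      this.style.border = '1px solid currentColor';
    }
    // Fired when the element is detached again.
    disconnectedCallback() {
      console.log('fancy-box detached');
    }
  }
  customElements.define('fancy-box', FancyBox);

  // Compatibility with "native" DOM APIs: appendChild works just as it
  // does for built-in elements, and the callbacks fire automatically.
  document.body.appendChild(document.createElement('fancy-box'));
</script>
```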


Seems very Java-y to me. Like an Everything Inherits From Element kind of philosophy.

Historically, one of the core values of browser design is to just provide the minimum possible hooks to access functionality, and to let library authors experiment widely with different APIs and interfaces on top of that. That's why there is so much innovation in things like application lifecycle, layout engines, look-and-feel, etc, on the web.

I mean, I get the idea: you can define these custom elements, and then have a bunch of "raw" HTML files that don't use JavaScript and have access to this advanced widgetry.

But boy, that's adding a weird layer of confusion. Now those files are "HTML"ish, but can't actually render without this other JavaScript going on.

You sacrifice a clear definition of what HTML is.

It reminds me of ES6 actually... everyone rushed to add all of this syntactic sugar and comfort features to the language, but now there is no definition of JavaScript anymore. It's just some vague container for some set of possible language properties that may or may not be available on various platforms.

I think people underestimate how powerful the web philosophy of "just add the basics, and let people build powerful interfaces in the application layer" is.

We have Rubyists and other folks from pro-oriented languages coming in and trying to turn HTML+JavaScript into a professional programming environment, with all of the chaos and insanity that comes along with that: versioning, build systems, package management, etc. They really don't understand the power of having a single runtime that works everywhere and doesn't do much.

Separation of concerns is powerful, and the simplicity of the browser runtime I think provides a helpful constraint which is worth the frustration.

The Ember people are near the heart of it. They recently dropped support for Node, making a hard fork where they're only supporting io.js developers now (franchise naming rights aside). But that attitude is everywhere now. The React crowd is all-in on transpilation. The Webpack people have ditched the idea that there should be some observable relationship between what's in the browser and what's on the filesystem.

I don't know. I guess I'm just sad that the JavaScript+HTML community have been taken over by people who don't really seem to like JavaScript+HTML much. And not enough time has passed for those of us who actually like the web to start our equivalent of record stores and classic car clubs.

Old man rant over I suppose.


The logical extreme of your argument is to make everything a `div` with classes.

While possible, the main issue with that model is that it provides no encapsulation strategy. Encapsulation is one of the core tools for programmers for building complex programs, so having first class support for that in the templating language suddenly makes a lot of things easier.

Without encapsulation, a lot of things become more frustrating. Event bubbling has to be micro-managed. Dev tools are less useful. It becomes even more frustrating to deal with conflicting CSS IDs.

Steele's Growing a Language[0] talk specifically points to encapsulation as the core building block for programs. Given how HTML is used nowadays (React and Angular are both "HTML as program layout" models), not having encapsulation is a serious handicap.

[0]: https://www.youtube.com/watch?v=_ahvzDzKdB0


For me the logical extreme is "use the most basic control structures you can get away with". And it's easy (and useful) to use `a` tags and `span` tags and even `p` and `h1` tags every now and then. Also, `canvas` is quite useful, and `input`, not to mention `script`, `link`, ...

Actually, now that I think about it, there are lots of tags I use that aren't `div`. Writing the shims to reimplement all of those is exactly counter to what I am suggesting. I am saying, use what's there unless you have a really really good reason.

> Without encapsulation, a lot of things become more frustrating.

I'm not talking about encapsulated vs. raw, I'm talking about two things mostly:

1) declarative vs imperative.

2) bundled frameworks vs single purpose tools

Angular is a huge monolith that one must boot and then understand, because the way it does work for you is by having you declare things correctly on the filesystem and in magical scopes.

Jquery is bundled, but it's imperative.

Webpack is declarative, but it's not bundled.

I have no problem with tooling; I just think the way we are designing tools is going entirely in the wrong direction.

The general question to ask is: how many systems do I need to understand in order to be able to describe what this tool I am using will do?

If the tool is Rails, the answer is "there is no end to the number of systems you need to understand in order to be able to describe what this Rails app will do".

If the tool is a PHP binary, the answer is much narrower.


> I guess I'm just sad that the JavaScript+HTML community have been taken over by people who don't really seem to like JavaScript+HTML much.

Why? Why do you care so much about a particular technology? 'Liking JavaScript' is such a useless distraction - instead, spend time building great products with the best tools available. JavaScript is just an implementation detail of what you're building.

I choose things like React, Webpack, CSS Modules and the rest because they make it easy to quickly build non-trivial projects with a team of people. We were able to build a better product quicker because of these technologies. If something else comes around that's 'normal regular javascript' (whatever that actually means) that lets us develop just as fluently, I'll happily use that.

At the end of the day, I'm here to build products. Who cares if it uses 'real' Javascript or not.


> Why? Why do you care so much about a particular technology?

It's caring about your own neighbourhood. For better or worse, some technologies affect everything we deal with daily. The web is one of them - I don't get to code Common Lisp in the browser, I have to do HTML + CSS + JavaScript, and therefore the way it gets increasingly shitty over time affects me every time I interact with a website. And the web becoming a primary code-distribution platform these days also affects me whenever I want to make something other people will use, because I'm forced to deal with those particular technologies.


The Web isn't just for churning out products, it's being used as communication medium by billions every day.

While I kind of like web components and will probably embrace them, I think the uneasiness comes from the WHATWG et al. piling stuff into browsers which are already monstrosities in terms of functionality and LOC, especially when there's no new essential functionality gained.

I had hoped WHATWG's work would result in a compact runtime description (as much as possible) of what a user agent is supposed to do, for web developers to build on. But all that's happened really is that Chrome/webkit (which was in an entrepreneurial position at the beginning, too) has gained dominance in the process, and there's less of a chance than there ever was that a new browser runtime could be developed from scratch.

Rather than spec'ing out core functionality (personally, I had hoped for a langsec profile of JavaScript, and a formal semantics for CSS), the WHATWG is significantly extending the scope of the spec with web components, and soon, WebAssembly.


> I guess I'm just sad that the JavaScript+HTML community have been taken over by people who don't really seem to like JavaScript+HTML much.

I'm inclined to agree, although, to be honest, it doesn't help that JavaScript and HTML really suck for developing applications. And you can't really blame them, because they weren't designed for that. We happened to shoehorn them into these roles because their portability and accessibility were so convenient, rather than focusing on building something better. So, this is where we are.

However, having got my start with Web development back in 2003, I very much remember a time when Web development was simpler. When jQuery came out is when everything started going downhill. That's when everyone decided that serving a buttload of third-party JS with your payload was an acceptable thing to do, and it's only gotten worse since then. And it's probably tumblr's fault or perhaps Reddit's fault that putting your text inside of an image became an acceptable thing to do.

At some point, every developer cared about how much of the system's resources their apps were leeching. Then, as resources became more and more plentiful, developers stopped caring, and now we have the occasional blog posts, like "How I Reduced the Number of Servers We Need", when someone rediscovers the lost art of being aware of your resource consumption.

Wirth's Law is probably my biggest disappointment with computing.


> I guess I'm just sad that the JavaScript+HTML community have been taken over by people who don't really seem to like JavaScript+HTML much.

I'd always hoped there would be a sensible way to incorporate other languages into the browser natively, like Python (I'm aware of things like Skulpt, but they're not Python Python, they're JS Python). It would make sense, given the ease of installing new languages on systems these days. However, I get that it would be a major PITA keeping all the languages synced up with all the development going on in the browser.

Alas. Thus endeth my old man rant.


Hopefully WebAssembly will help to alleviate this somewhat.

Back in the day, JavaScript didn't have so much of a monopoly on, as we called it back then, "DHTML". IE could also do VBScript, and there was a way to do Tcl in various browsers, too. And with Flash, back in the ActionScript 2 days, you could use the vaporware-to-be ECMAScript 4, which was rather different from either ES3 or ES5, and, imo, it was a shame that it was abandoned. Java applets, Silverlight. These things wound up with a poor security record, but I think something like the PPAPI introduced with Chrome mitigates that security risk by A LOT, since plugins run in their own sandboxed processes.

While I'm trying to be optimistic about WebAssembly, I can't help but think that a PPAPI/PNaCl-based solution would be ideal, along with a better UX for installing and managing such plugins.


> I'd always hoped there would be a sensible way to incorporate other languages into the browser natively, like Python

The funny thing is, I support the crap out of that. I'm really excited about Web Assembly, which seems like the path to that.

See, that's a perfect example of how the web platform is supposed to evolve: It's a genuinely new feature that allows you to do things in the browser you couldn't do before.

Shadow DOM is another example of a new tech that allows basic access to a fundamentally new power. It can't come soon enough.

I'm not saying the web shouldn't evolve. I just think people who are trying to turn it into Ruby are not helping.


It would also be a security nightmare, which is one reason why browsers don't really bother.


Could you provide some specific example of something that can be easily done with custom elements that cannot be easily done with other existing APIs? (I'm speaking about user-facing stuff, not some implementation detail.)

Edit:

Here is my quick re-implementation of their example using today's APIs: https://jsfiddle.net/pd0na425/3/

The HTML instantiation uses a single tag, just like theirs. Fully reusable. Works in all modern browsers. I believe it uses less code too.


Your example has many serious limitations:

(1) It doesn't let you have multiple progress bars that all advance at their own rate.

(2) It doesn't work at all for custom-progress-bar elements added dynamically to the document after the timers for the first progress bar finish.

(3) It makes the children of <custom-progress-bar> actually appear in the DOM, which means that code generically walking the DOM will see them and might mess them up.

The basic thing that the combo of Shadow DOM + Custom Elements provides is encapsulation. You can make standalone elements that operate themselves, work just like normal elements, handle attributes and children, and start by themselves. Anything done using encapsulation can be done without encapsulation, in a sense. What encapsulation provides is better organization for your program, and more elegant and understandable code at the point of use.
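As a hedged sketch of that encapsulation (the element name, timing, and styling are all invented): an element built on Custom Elements + Shadow DOM carries its own behavior and keeps its implementation markup out of the page's DOM:

```html
<custom-progress-bar></custom-progress-bar>
<custom-progress-bar></custom-progress-bar>
<script>
  class CustomProgressBar extends HTMLElement {
    constructor() {
      super();
      // The markup below is an implementation detail: it lives in the
      // shadow tree, invisible to code that walks the page's DOM.
      this.attachShadow({mode: 'open'}).innerHTML = `
        <style>
          :host { display: block; height: 8px; background: #ddd; }
          div   { height: 100%; width: 0; background: #36f; }
        </style>
        <div></div>`;
    }
    connectedCallback() {
      // Each instance drives itself, at its own rate, starting
      // whenever it is attached - including dynamically added ones.
      let value = 0;
      this._timer = setInterval(() => {
        this.shadowRoot.querySelector('div').style.width = ++value + '%';
        if (value >= 100) clearInterval(this._timer);
      }, 50);
    }
    disconnectedCallback() {
      clearInterval(this._timer); // stop waking the CPU once detached
    }
  }
  customElements.define('custom-progress-bar', CustomProgressBar);
</script>
```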


> (1) It doesn't let you have multiple progress bars that all advance at their own rate.

It does.

> (2) It doesn't work at all for custom-progress-bar elements added dynamically to the document after the timers for the first progress bar finish.

It does.

I think you're confusing the "library" code with the "user-land" code that drives it (the latter was copied from the example in the original article). Here is a version with two progress bars where I separated the "library" into a separate box:

https://jsfiddle.net/pd0na425/4/

> (3) It makes the children of <custom-progress-bar> actually appear in the DOM, which means that code generically walking the DOM will see them and might mess them up.

This part is true. However, if you have code that generically walks the DOM and screws stuff up, how do you expect your website to operate anyway? The real issue is overly broad CSS selectors, and it's not really something specific to custom elements. Everyone has to deal with it for all CSS and all elements. In this case, it's actually easy to mitigate: since all the inner elements are custom, I can easily add a prefix that will effectively prevent them from being styled by other libraries. That said, I wish the W3C actually solved CSS scoping properly. That would be way, way more useful than any component API, and it would be useful for every website that uses styling.


You've solved one set of problems (and fixed the major problem that the original had no separation between library and user land code) and created some new problems:

(1) A timer will fire forever, even if no progress bars are currently updating. Bye bye, battery life!

(2) A progress bar can't update at 60fps, even if changes to the underlying value happen that quickly.

If you continue playing whack-a-mole with these types of problems, you might see why Custom Elements + Shadow DOM are an elegant way to avoid these issues while providing good encapsulation.

(As someone else mentioned, Shadow DOM provides style scoping so it doesn't have the CSS issue you raised, even if the contents are normal built-in elements, while your example does.)

I hope this helps to clarify what Web Components technologies are good for.


>You've solved one set of problems (and fixed the major problem that the original had no separation between library and user land code)

My original followed the structure outlined in the article's example. It runs the exact same library code as the second link. I did not "fix" anything, since nothing was broken in the first place. I simply split it for clarity.

> A timer will fire forever, even if no progress bars are currently updating. Bye bye, battery life!

Your claim regarding battery life is unsubstantiated. Simple timers like that take up very little processor time as long as you don't do superfluous DOM updates on every invocation.

> A progress bar can't update at 60fps, even if changes to the underlying value happen that quickly.

Firstly, it can, if you set the timer to 16 ms and nothing else is loading the browser engine.

Secondly, if you think that your typical events always update the page graphics at 60 FPS, you are mistaken.

But most importantly, you really don't want to update a control like this 60 times per second, because you would be wasting CPU cycles on re-calculating values that no human will ever notice.


> Your claim regarding battery life is unsubstantiated. Simple timers like that take up very little processor time as long as you don't do superfluous DOM updates on every invocation.

That seems intuitively plausible, but for modern CPUs, it is very wrong. To give a specific example: modern Intel CPUs have a wide range of power levels including idle states called C-states. C-states are very low power, but somewhat expensive to enter and leave; while in a C-state, a CPU core can't really do anything besides wake up to a higher state. A frequent timer, even if it does almost nothing, stops the CPU from being idle long enough to stay in a C state profitably. Thus, frequent timers that do almost nothing can hurt your battery life more than sharp but very infrequent spikes in CPU usage.

My claim to expertise: I was heavily involved in optimizing Safari and WebKit for battery life. Most of what we had to do was not reducing average CPU usage, but rather preventing gratuitous wake from idle, or where not possible, coalescing repeated wakeups so they happened at exactly the same time instead of slightly offset. We actually measured this and it is a major effect. In fact, tuning this way is one of the major reasons that browsing in Safari delivers longer battery life than browsing with Chrome on the same hardware. I am going to bet you did not measure and your claim is based on speculation.

On mobile devices it's somewhat different. The screen and radio are a much bigger portion of the total power envelope so the CPU doesn't matter quite as much. But as mobile CPUs continue to get more powerful, maintaining good idle hygiene will become more important there too.

Short version, and something all web developers (and native developers) should know: idle timers are extremely costly in terms of battery life. When your app or webpage looks idle, it should actually be idle!

> Secondly, if you think that your typical events always update the page graphics at 60FPS, you are mistaken.

> But most importantly, you really don't want to update a control like this 60 times per second, because you would be wasting CPU cycles on re-calculating values that no human will ever notice.

When the process driving the progress bar is making material progress, it is both feasible and desirable to update the progress bar within one frame.


Since I don't know what exactly you measured, how you did it, on which devices, and for what use cases, I cannot comment on specifics of what you said.

I tested various variations of this approach for extreme cases (large pages, many timers, complex CSS queries). The visual performance and CPU utilization were much, much better than I initially expected. My testing for mobile was less thorough, but I didn't notice anything horrible going on with the battery, precisely because of what you said: the browser-based drain was negligible compared to drain by screen, network and background services. I admit neglecting to test this on unplugged laptops. I will eventually get to it, but I don't think the results will be as bad as you make them sound.

Regardless, notice how the conversation shifted from "this is impossible" to "this doesn't scale" to "this might not be very battery-efficient in some cases". I consider it premature to dismiss this approach without knowing the exact costs for battery use in real-life browsing scenarios. (After all, I'm not the only one on the web using timers, and some other things many websites do today slaughter both battery and visual performance, yet people continue using them without thinking twice.)

The approach has merit. In fact, it has a lot of merits that aren't obvious from such a simple example. The true benefit of timers is that they eliminate many issues related to events and inter-component interaction, thus drastically cutting down on code complexity in more dynamic, more complex pages. Custom elements are not going to address that any more than ASP components did in their heyday.


The CSS in a web component is scoped.


So you have to run a querySelectorAll() across the whole document every time you want to update/upgrade your elements? That's not going to scale at all.


You should actually try it before making such statements. It scales well enough. Also, we could use getElementsByTagName instead of querySelectorAll. But that's not important.

The point is that there are approaches to handling "components" with current APIs. They are simple, they work across all browsers right now and they are good enough for 99% of websites out there.
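For the record, one common shape for that with 2016-era APIs is a MutationObserver that "upgrades" matching tags, including ones added later (the tag name and the upgrade body are invented for illustration):

```html
<x-widget></x-widget>
<script>
  function upgrade(el) {
    if (el.dataset.upgraded) return;   // make upgrades idempotent
    el.dataset.upgraded = 'true';
    el.textContent = 'upgraded';       // real code would build the widget here
  }

  // Upgrade everything already in the document...
  document.querySelectorAll('x-widget').forEach(upgrade);

  // ...and anything added later, without polling.
  new MutationObserver(mutations => {
    for (const m of mutations)
      for (const node of m.addedNodes)
        if (node.nodeType === 1) {     // element nodes only
          if (node.matches('x-widget')) upgrade(node);
          node.querySelectorAll('x-widget').forEach(upgrade);
        }
  }).observe(document.documentElement, {childList: true, subtree: true});
</script>
```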


I'm using it to build wrappers around d3-based graphs to make it easy for non-software engineers (mechanical, electrical) to create and modify simple PID controls. There are PLC programs and interfaces, but by and large they're clunky, aren't customizable, or require learning something like LabVIEW.

The custom elements help specifically by encapsulating the (somewhat tedious) d3 graphing and controls connections into simple HTML objects with a simple spec the engineers can lookup. HTML is then declarative and easy for non-programmers to try, test and see immediate feedback. This works out well for creating functional controls systems which can then be cleaned up by experienced developers who can tune the underlying web components.

JavaScript libraries all require much more knowledge and generally don't "scale" when you want to add several graphs quickly. Frameworks like Angular 2, Vue, or React all require a significant learning curve and an understanding of the framework to get working. Vue seems to be the simplest, but web components provide an API which Vue et al. can build on by encapsulating more complex HTML/CSS/JS into semi-hidden sub-trees.


The combination of Custom Elements and Shadow DOM allow you to create a separate, semantic representation of your markup that you can script against, while encapsulating the actual markup required for presentation.

E.g., imagine you wanted to select all the images in a carousel. Without Shadow DOM, something like document.querySelectorAll('.carousel img') might pick up images being used as implementation details, like left/right button images, in addition to the carousel images themselves. With Custom Elements and Shadow DOM, your markup could contain only the content you intend to represent, and document.querySelectorAll('.carousel img') would return just the images being shown in the carousel.

Of course, like image carousels, whether or not this is a good idea is subjective. But it does give you additional functionality. Hopefully we'll learn how to use it responsibly.
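A sketch of that difference (the carousel markup here is invented; a real carousel would do far more):

```html
<image-carousel>
  <img src="one.jpg">
  <img src="two.jpg">
</image-carousel>
<script>
  customElements.define('image-carousel', class extends HTMLElement {
    connectedCallback() {
      // The arrow-button images live in the shadow tree, so page-level
      // selectors never see them; the user's <img> children are
      // rendered through the <slot>.
      this.attachShadow({mode: 'open'}).innerHTML = `
        <button><img src="left-arrow.png" alt="previous"></button>
        <slot></slot>
        <button><img src="right-arrow.png" alt="next"></button>`;
    }
  });

  // Matches only the two content images, not the arrow images.
  console.log(document.querySelectorAll('image-carousel img').length);
</script>
```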


Wait, what? You just described the benefits of Shadow DOM. Shadow DOM has benefits, but you could get those benefits without Custom Elements, right?

What can I do with Custom Elements + Shadow DOM that I can't do with Shadow DOM alone?


You can create reusable UI widgets that are usable without the user knowing how they were written. E.g., with today's frameworks you need to know how a widget was created, and that may involve rolling up your sleeves and diving into a bunch of framework-specific concepts. With custom elements you could just have your server emit an <image-carousel> element, or do document.createElement('image-carousel').

This is good for ease of use, but it's also great for interop. Custom elements work with every way of writing a web site because every way of writing a web site can deal with elements.


I suppose the ship has sailed a while ago on this, considering various projects I've had over the last couple years but:

You can create reusable UI widgets that are usable without the user knowing how they were written with an element like <div class="FrameworkName_imageCarousel" ...> also, and your client library handles FrameworkName_imageCarousel without you needing to know how that is done.

I guess where document.createElement is concerned you might need to do framework.create('image-carousel'); or something similar, but I suspect if you are programmatically creating your custom elements you are probably also going to have to do more with them than just create them, and at that point you are going to need to know what code you are expected to call to do whatever things you want to do.

When you say it's also great for interop, I worry we have different definitions of interop. I don't call it interop when my way of writing a web site can deal with elements; I call it interop when other people's ways of consuming websites can deal with elements, and I have a hard time understanding how interop is going to be great for consumers of non-standardized elements (robots etc. included under "other people's ways of consuming websites").

My feeling is really that a benefit to this that does not seem to get discussed is the ability to limit 'interop' that the solution builder does not explicitly control with its implementation (except where the big players are concerned, who have the resources to overcome piddling limits like these). So no more easy crawler solutions, you want to deal with crawling our site and you're not google? Well you will have to reverse engineer our <asio-link> element and figure out how it works to crawl it.

Even if you automate a browser you still have to have code to click on asio-links to get the script implementing that element to run!

But like I say that ship has pretty much sailed anyway, this kind of thing just officially validates the direction the ship is headed. (and I'm not happy about the direction)


> You can create reusable UI widgets that are usable without the user knowing how they were written with an element like <div class="FrameworkName_imageCarousel" ...> also, and your client library handles FrameworkName_imageCarousel without you needing to know how that is done.

With all frameworks I know of, this won't work for elements dynamically added to the DOM after the start of your application. Not only document.createElement, but even if you insert chunks of markup using innerHTML or the like.

Sometimes there is a framework-specific call to do the right thing, but then you're back to needing to know how an element is implemented.

For Custom Elements, it really is as simple as just inserting them. You don't have to run any script code to make them do what they do.
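For example (element name invented), this works with no framework bootstrapping at all:

```html
<div id="target"></div>
<script>
  customElements.define('image-carousel', class extends HTMLElement {
    connectedCallback() { this.textContent = 'carousel ready'; }
  });

  // No framework init call needed: the browser upgrades the element
  // as soon as the markup lands in the document, even via innerHTML.
  document.getElementById('target').innerHTML =
      '<image-carousel></image-carousel>';
</script>
```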


Ok thanks, now I see a benefit - not sure if I consider it enough of a benefit for creating what I would consider to be awful interop problems but I can see a benefit.


But in reality, the server would have to emit

  <image-carousel delay="10" images="['...', '...']" end-behavior="cycle">

Thus making us no more free from the details of the component than we were before.


> no more free from the details

i suppose it depends on what you mean by "the details."

details can be just the inputs required by "the interface" or it can be specifics of "the implementation". the purpose of an interface is often to distance (perhaps even "free", depending) the user from the implementation.


Parameters aren't implementation details.

Implementation details are stuff like "It's written in framework X so you need to call that framework's init method after document load, then do frameworkX.initCarousel(document.querySelectorAll('.image-carousel')).onEnd((carousel) => carousel.cycle())".


Distribute a widget that others can use, or that you can use more easily in other contexts.


What? You can distribute widgets with JavaScript (and Custom Elements require JS). So what was the benefit, again?


This one is easier and more universal than the other approaches. There's no setup involved like:

   $("#foo").myWidget();
You just put the markup on the page and include the JavaScript:

  <my-widget></my-widget>
And of course, you could do what querySelector does before it existed, but that doesn't mean there wasn't benefit to it being native.


I get that use case, but is a slightly shorter query selector a good trade-off in exchange for introducing syntax that crawlers, screen readers, and other non-browser HTTP clients can't deal with?

Also, I don't know about the semantic markup bit. "Semantic" markup (whatever that is) was already the idea when CSS was introduced. Now we need Shadow DOM, i.e. additional markup, to make CSS manageable, which seems odd to me.

The "semantics" of HTML is really just that it's a structured document format with paragraphs, headers, lists, links, and a couple other idioms with a more or less tightly controlled inner structure. Next to these, custom elements seem out of place, and more like a general mechanism for page composition.

As such, they're bound to evolve over time. For example, for web components it's not difficult to imagine that a "container" component (for aggregation of other components, say a tab control as a trivial example) would be a useful thing to have. But then this calls for a way to place web components into the child content of other components, something which I understand isn't possible right now. Basically, we're looking at a long journey to morph HTML into something else here.


This is possible today using slots.
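With the v1 APIs, a container component distributes its light-DOM children through <slot> elements. A toy tab-control sketch (names are illustrative):

```html
<simple-tabs>
  <h2 slot="label">First</h2>
  <p>First panel content, which can itself contain other components.</p>
</simple-tabs>
<script>
  customElements.define('simple-tabs', class extends HTMLElement {
    connectedCallback() {
      // Children with slot="label" render into the named slot;
      // everything else falls into the default <slot>.
      this.attachShadow({mode: 'open'}).innerHTML = `
        <nav><slot name="label"></slot></nav>
        <section><slot></slot></section>`;
    }
  });
</script>
```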


I'd say the big idea is to make using and reusing of those elements more declarative. With the Shadow DOM, you can also hide all the internal clutter, which is extremely convenient if you use some widget in many different places.

It's true that you can achieve something similar with templates (or partials), but those are tied to a specific platform or stack.

The <template> tag is also quite neat since the (inert) subtree is created immediately and then cloned when instantiated. This is quite a bit faster than creating HTML strings which are then parsed and finally turned into elements.
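The pattern looks like this (the ids and markup are illustrative):

```html
<template id="row-template">
  <li class="row"><span class="name"></span></li>
</template>
<ul id="list"></ul>
<script>
  const tpl = document.getElementById('row-template');
  // Cloning the template's inert content is typically faster than
  // building and parsing an HTML string for every row.
  for (const name of ['Ada', 'Grace']) {
    const row = tpl.content.cloneNode(true);
    row.querySelector('.name').textContent = name;
    document.getElementById('list').appendChild(row);
  }
</script>
```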


My sentiment exactly. The thing they show in "live demo" can be done using existing DOM APIs, with less code.

Shadow DOM seems useful as far as isolation of styling is concerned, but it's really weird that it's not part of normal CSS spec. CSS in general needs some form of isolation. People resort to stuff like BEM to work around its absence.


Do custom elements allow re-templating?

It's pretty powerful to be able to separate the visual presentation of a control (i.e., markup) from the logic. Every determinate progress bar has 0..1 progress and current progress properties, but the visual representation needs to be able to vary widely in order for these elements to be truly reusable. And it would be nice to be able to change the template (including adding/removing DOM elements to it) without having to define a subclass in JS.


Custom Elements don't specify how they render at all - they're only about creating instances of specific HTMLElement subclasses during browser operations like parsing, document.createElement(), cloneNode(), etc., and firing "reactions" like connectedCallback() and attributeChangedCallback(). Once created, Custom Elements can really do anything to the document; there's no built-in concept of templating.

Generally though, Custom Elements are going to create a shadow root and render some DOM into it. It's all just code, so an element subclass can use a template library, and can do things like you want, like decouple its template from its implementation.

Here's a sketch of an element that would use a template as determined by a 'template' attribute.

    class MyElement extends HTMLElement {
      // Without this, attributeChangedCallback never fires for 'template'.
      static get observedAttributes() { return ['template']; }

      constructor() {
        super();
        this.attachShadow({mode: 'open'});
      }

      render() {
        this.template.render(this.shadowRoot);
      }

      attributeChangedCallback(name, oldValue, newValue) {
        if (name === 'template') {
          this.template = templateLib.lookup(newValue);
          this.render();
        }
      }
    }
And you'd use it like this:

    <my-element template="progress-bar"></my-element>


Use composition. Extract the logic out of your custom element into another class (e.g., Progress). That class can then drive any compatible presentational custom element (e.g., ProgressText, ProgressBar).


How do we employ progressive enhancement using custom elements? Given their reliance on JavaScript, as an old-school front-end developer, this feels like an unfortunate oversight.


I've been working on one fun solution to this: server-side rendering of web components - https://github.com/pimterry/server-components.

The idea is much the same as server-rendering anything else: you give the components a first run to generate HTML server-side, send that first, and then let JS progressively enhance interactivity on top in the client if it can, or just stick with the initially rendered version if it can't.

(Server components is currently still on the V0 version of the spec, and this article is talking about V1, but they're very similar and we're migrating at the moment).


I don't see how this could possibly work even a little bit without Javascript. What would you fall back to? A static hunk of HTML? That could only work in the very specific case of a custom control that happens to have exactly the same utility on the page as an existing HTML element. And even then, you'd need to add a templating language, so you could express how to render that HTML chunk with a certain value or text. This templating language would not be Javascript, since you're trying to avoid that. So we add a new templating language to HTML for the benefit of rendering content for people who turn off Javascript?


I completely agree, but the question remains: what experience do we provide when JavaScript is not available? (Progressive enhancement isn't just for "people who turn off JavaScript"; it's a defensive approach to providing inclusive, universal solutions on the web. Something I feel is lacking in the modern front-end developer's worldview.)


You're back to the case of having one static page and then having your JavaScript enhance things as you go. You'll be stuck replacing elements with your JS. I still think it's a great way to think about how we render on things like mobile. Ever get those slow-loading pages where the elements jump all over the place? Or the pop-up ad finally renders half-way through reading an article? We need to build those cases into the baseline.

Fortunately, we can reliably depend on javascript now. What we cannot rely on is internet/processing speed.

I've always found progressive enhancement to work best as an additive process. You want to define the functionality of your page and then add enhancements and verify that the base functionality still holds. If you start from the most complex behavior it is a lot harder to reason about how removing functionality might hurt the page.


How would you do a progress bar without JavaScript? Well, you wouldn't. So custom elements don't affect progressive enhancement as far as I can see. They enhance the DOM when JavaScript is available, and when it's not, they're essentially a div.
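One mitigation is to give the custom element meaningful fallback children, which render as ordinary markup until (or unless) the upgrade runs. A sketch, with a made-up element name:

```html
<!-- Illustrative only: <fancy-progress> is an invented element name. -->
<fancy-progress value="30" max="100">
  <!-- Without JS, <fancy-progress> is just an unknown element and this
       child renders as-is; the upgraded element can hide or replace it. -->
  <progress value="30" max="100">30%</progress>
</fancy-progress>
```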


Why isn't there a skinnable native progress-bar widget?


lol it's 2016, we ignore people who have javascript turned off


All this is over-engineered IMO.

It would be enough to add support for a behavior property in CSS:

      my-toggler {
         display:block;
         behavior: MyToggler url(my-components.js);
      }
As soon as the browser sees <my-toggler /> in markup, it will call the function MyToggler() from the file my-components.js, with 'this' set to the element. That's what I did in my Sciter[1]. It has proven quite useful and easily understandable, yet implementable in 100 lines or so of native code.

The value of shadow DOM is also quite doubtful, to be honest. Shadowed from whom, and for what purpose? Users don't care about DOM structure. Programmers? They are OK with it.

[1] http://sciter.com


> All this is over-engineered IMO.

It's complicated in order to provide a lower-level API so that libraries can take advantage of different features and provide more opinionated ways of creating custom elements. Polymer is an example of a library which provides a simpler (but slightly more opinionated) API on top of custom elements.

> The value of shadow DOM is also quite doubtful to be honest. It is shadowed from whom and for what purpose? User don't care about DOM structure ... Programmers? They are OK with that.

The Shadow DOM provides style isolation, which is convenient to developers mixing-and-matching components, and can also be a performance boost as browsers can optimize for the fact that styles don't cross the shadow boundary. It also provides a bit more isolation for when you are writing code to discourage components from reaching into each others' private implementation.
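A minimal sketch of that isolation (the element name is made up): a rule defined inside a shadow root matches only nodes in that shadow tree, and outer page rules don't reach in.

```html
<div class="label">Outside: page styles apply here.</div>

<shadow-demo></shadow-demo>

<script>
  customElements.define('shadow-demo', class extends HTMLElement {
    constructor() {
      super();
      const root = this.attachShadow({mode: 'open'});
      // This .label rule matches only inside the shadow tree; the outer
      // .label div above is unaffected, and outer page rules for .label
      // do not cross the shadow boundary either.
      root.innerHTML = `
        <style>.label { color: red; }</style>
        <div class="label">Inside: only the shadow style applies.</div>`;
    }
  });
</script>
```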


> The Shadow DOM provides style isolation ...

I once proposed the concept of style sets, the @set construct, on www-style.

The style sets mechanism solves two principal problems of CSS:

1. They allow defining local and independent style systems for elements in the same DOM tree.

2. Style sets reduce the computational complexity of style resolution. In classic CSS, all style rules used by the document are placed in a single ordered table. Finding the list of selectors matching a given element has computational complexity near O(n), where n is the number of rules in the table, so style resolution of the whole DOM tree is an O(n * m * d) task, where m is the number of DOM elements and d is the average depth of the tree. Style sets significantly reduce the n multiplier in this formula.
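The cost argument in point 2 can be illustrated with a toy matcher, where a "selector" is reduced to a bare tag name (real engines are far more sophisticated; this only shows where the n * m factor comes from):

```javascript
// Toy model: each rule has a selector (here just a tag name); a counter
// records how many selector tests a naive engine performs.
function resolveStyles(rules, elements) {
  let tests = 0;
  const matched = new Map();
  for (const el of elements) {           // m elements
    const hits = [];
    for (const rule of rules) {          // n rules -> O(n * m) tests total
      tests++;
      if (rule.selector === el.tag) hits.push(rule);
    }
    matched.set(el, hits);
  }
  return { matched, tests };
}

const rules = [{ selector: 'p' }, { selector: 'div' }, { selector: 'span' }];
const elements = [{ tag: 'p' }, { tag: 'div' }, { tag: 'p' }, { tag: 'em' }];
const { matched, tests } = resolveStyles(rules, elements);
console.log(tests); // 3 rules * 4 elements = 12 selector tests
```

A style-set mechanism that partitions the rule table would shrink the n each element is tested against.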

http://sciter.com/css-extensions-in-h-smile-engine-part-i-st...

> to provide a lower-level API ...

I haven't seen any requests from framework/platform developers.

On the contrary, the ability to call a particular function when a particular element appears in the DOM has been in demand since the first frameworks appeared. People tried to solve it with DOM mutation events, but those did not go through - too expensive, etc.


Past attempts at component models for the web have used CSS as the hook instead of a script call. For example, Mozilla's XBL and Microsoft's HTC. This way of doing it is a security hazard because it makes CSS into active code instead of just passive styling. Many sites had problems where they were willing to include CSS from a third-party source for user customization, and ended up with XSS security holes.

If you do it carefully and get all the details right, you end up with something like Custom Elements and Shadow DOM, both of which are ultimately not that complicated.


> to include CSS from a third-party source for user customization, and ended up with XSS security holes.

So custom components cannot be included from third party sources? If they can then this is exactly the same problem.

And in general, the idea of including third-party CSS from a source you cannot control is even worse. The same goes for JS.

That `behavior` thing can use 'same origin' rule as we have now for any other JS code.

The behavior simply defines the name of a function (in any language) that can set up the element in various ways: e.g. it can be used with React, Angular, Vue, you name it.


> So custom components cannot be included from third party sources? If they can then this is exactly the same problem.

They can, but you can separately decide whether to support components from arbitrary off-site sources (most sites wouldn't) vs. accepting styling from arbitrary off-site sources.

> That `behavior` thing can use 'same origin' rule as we have now for any other JS code.

There is no same-origin restriction for JS included directly with the <script> tag. Most websites are not designed to expect the <style> tag to be a backdoor <script> tag. If you mean to invent some novel security restrictions on JS running in the same frame, trust me, it will end up more complex than Custom Elements.


> It is just enough to add support of behavior property to CSS:

That's been done before: that's how XBL works in Gecko.

Having now worked with it as both API consumer and API implementor for a while (going on 15 years now), attaching behavior via CSS like that has some really nasty aspects to it. For a start, if you create a "my-toggler" element via createElement, it won't have the behavior attached (because style doesn't get resolved for elements outside the DOM). And if you insert it into the DOM inside a display:none subtree, it won't have the behavior, because style resolution is optimized out inside display:none subtrees. There are various other issues; I don't really have time to write them all out right now.

But suffice it to say that this way of attaching behavior via CSS has been considered, implemented, shipped, used extensively, and found wanting.


Some custom elements I worked on this week: https://mrdoob.github.io/model-tag/


Not supported in Safari; on Chrome (macOS) I get "model-gltf.js:19758 THREE.WebGLRenderer: Error creating WebGL context" and a bunch of other errors.


What's the difference between this and Polymer?

It seems to be somewhat similar.


Polymer just makes it easier to create Custom Elements. It mostly adds templating and data binding, which aren't built-in concepts in the Web Components specs. Every Polymer element is a custom element, and they can freely mix with other custom elements.

Custom Elements and Shadow DOM being implemented native in Safari means that Polymer is just much smaller and faster on Safari.


Polymer is an opinionated library for creating Web Components. It comes with the required polyfills to make it work in today's browsers.

Performance is better with browsers which support it natively.


For one, this is integrated into the browser. No dependencies required.


I have the same question, and where does either fit into the other bits and pieces like Angular and React?


The hope is that people will be able to make custom tags that work with any UI framework, so you could use them with Angular or React or no framework at all.


Exactly! It would be interesting to hear what the creators had in mind when they built this.


Maybe it's not obvious from the WebKit article: Custom Elements aren't new, their native implementation in WebKit is what's being announced here. Custom Elements are part of the Web Components collection of cross-browser standards introduced in 2011 (and still not finalized). React was in its infancy at the time, Angular was just getting off the ground. Polymer has always been built on top of the Web Components standard(s). This post isn't about "Yet another new framework competitor!"


Polymer adds some syntactic sugar over this. This means that polymer will get faster on WebKit because it won't need the polyfill.


What is the state of Blink getting upstream webkit changes? How much time would this take to land in Chrome development?


Blink doesn't get any upstream changes AFAIK. In fact it's moving much faster than WebKit and got custom elements back in 2014 with version 33.

http://caniuse.com/#search=custom%20elements


It's already in Chrome stable. Open your dev tools and type `customElements`.


Wow, since I went back to school recently I've lost touch with the HTML/JS news, so I've been completely unaware of browsers starting to integrate these features. Will this mean less dependence on React in the near future, or am I missing something?


I guess this is more of a shadow dom question, but what's the efficiency of the CSS with this pattern? It seems like styling is being inserted into every instance. Is there a browser feature to avoid internally duplicating it?


Chrome internally dedupes <style> tags with identical text, and typically the style tag is generated by cloning an existing one from a template, so there's very little actual overhead here.


Makes sense. Thanks.


I am very excited about this! I recently got re-introduced to the Polymer project. What they are doing is definitely the future of front end dev.


Not in MS Edge because it makes the browser more competitive with MS software frameworks that have had similar features for decades.


All four features of the Web Components spec (Shadow DOM, Custom Elements, HTML Imports and the Template element) are among the top-requested features on the MS Edge Developer UserVoice. <template> has already been implemented, while the remaining three are on their backlog.

https://wpdev.uservoice.com/forums/257854-microsoft-edge-dev...


What happened to separating content and presentation? I thought everybody got over the jQuery madness and moved on to template-based frameworks like React and Angular. Who would write code like this in 2016?

    this._label = shadowRoot.querySelector('.label');
    this._label.textContent = newPercentage + '%';


This looks like it's for the custom elements part of the web components spec. Doing HTML templates is already supported in most browsers [0]. I think they were just trying to show how to create custom HTML tags and left out other parts because it would be outside the scope of the article.

webcomponents.org has tons more info about web components.

I think it's pretty awesome to get templating, custom elements, scoped styling, and HTML imports as a browser feature without JS frameworks. It'll be neat to have as an option in the future when browser support gets better and people start using it.

[0] http://caniuse.com/#feat=template


Html <template> doesn't do data binding as far as I can see. It's just a way to re-instantiate fragments of a page, and then you are back to selectors and setting properties with JS. When I say templates, I mean declaratively defining the page with data binding in the HTML, so the JavaScript code doesn't have to know anything about the HTML.

There are two major reasons to use Angular/React: rendering with data binding, and reusable components. If Web Components are going to be a feasible replacement, they have to do both; looking at it now, it seems they only do the component part. Otherwise we will just end up with tons of web components that pull in Angular as a dependency, and we are back to square one. In fact it will probably be worse than before, because it's so easy to use third-party web components that you won't even think about all the different dependencies they pull in, and at the end of the day your page will require downloads of Angular, jQuery, React, Polymer and Vue all at once. This is one step forward and two steps back, unless web components can significantly improve the performance and code size of all these existing frameworks under the hood.


Custom elements are not related to templating. I suppose they did it this way because they were highlighting custom elements and not some templating thing.


Maybe the whole point here is that we are moving to a component-based way of developing UIs, with encapsulated and reusable elements, but this somehow collides with a foundation of the www, i.e. separation of concerns.

React, a tool with a very sharp focus (helping Facebook scale), chose to get rid of the whole concern; the W3C has been trying to solve this with Web Components (Custom Elements are part of them) since 2011 (!), and the specs are still in progress and changing from time to time, because getting things right is difficult when those concerns are at stake.


It is a bit of an anachronism, yeah. Soon, though, we'll have template literals [1], and this example will just be out of date.

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


<template> elements are also implemented everywhere but IE: http://caniuse.com/#feat=template

The advantage of <template> over strings in JS is that the browser can parse the <template> as HTML in parallel with parsing your JS. Less JS to parse means faster cold loads.


New elements? That's what XHTML has allowed since 1999.


XHTML doesn't give you a way to automatically give behavior and composite structure to a new element. It just gives you a way to put one in your markup without the validator complaining. But it won't do anything by default.


What will this actually be useful for?


A dependency-less, component-based alternative to React.

It's part of Web Components: http://webcomponents.org


Parts of React. For me the state management is a core part of React, and as far as I can tell this doesn't cover that, just the rendering part.


This seems almost exactly like all of React. These components do have state, just like a React component. If you're talking about how the state is changed, then now you're not talking about React. You're talking about redux, flux, etc. You may very well want something like that here too, but that's not because this isn't like React; it's because it is like React.


A big part of React (and arguably the whole point of React), is the way in which state changes are handled and re-rendered. A React version of this progress bar element would simply have a render method, that when the state changed would simply be called again for another full-re-render of the element. The fact that this grabs individual elements and mutates them is about as far from React-style programming as you can get. Virtual DOM, tree-diffing, all of this is around a central concept that "immediate mode" programming reduces visual bugs, and is completely absent from custom elements. I'm not saying Custom Element are wrong, but they are absolutely different from React in a core and essential way.


Where's the part of React which allows me to return a new copy of what I want the HTML to look like, while only updating the changed elements of my view instead of totally blowing away the existing elements (the virtual DOM + reconciliation part of React)? Also, where's the declarative event binding part?

I'd also point out that the state tree (setState) is a part of React, and plenty of apps are just fine being built using only setState for state management (no redux/flux/mobx).


You're right, custom elements don't do that, and it's not a goal of the custom elements spec (as far as I can tell) to promote any particular programming style. It's a low level spec. So showing the most low-level way to use it makes sense.

Here's a library that combines custom elements with VDOM rendering: https://github.com/skatejs/skatejs



What part of React "state management" do you find indispensable?


I don't see the reason for down-votes here guys. It's a legitimate question, isn't it? I was hoping the recipient would understand - once they try to enumerate "the parts" - that it's them managing the state, and that centralized state management and unidirectional data flow are obtainable in Backbone, Custom Elements, or vanilla JavaScript for that matter.


Just speculating, but they have picked up on the Socratic method aspect. Some find that condescending. If that's actually your goal (as indicated by "I was hoping the recipient would understand"), it's not so much a legitimate question that you want to know the answer to, is it? With your explanation, it's more of a rhetorical device.

Personally, I didn't read it that way (nor did I down vote you), but given your follow-up, I can see how some might.


Ah yes that's what I was wondering. So Custom Elements are actually part of the Web Components standard, and it looks like it was already implemented in Chrome and Opera, works with a flag under Firefox and so now it has landed in Webkit / Safari, right? Besides Edge, and safari's support for HTML imports, it looks like Web Components implementation is slowly getting there. What about uses? Are there any popular webcomponents out there yet?


I just watched the talks from Google I/O and the Polymer Summit; Polymer has a lot of big-name users (Coca-Cola, Net-a-Porter (no clue what that is), EA, etc.)


There are examples in this very article.


Great news, only thing missing now is HTML imports


This is a great idea, but how do you get around the fact that it's not valid HTML? Is that even a concern anymore?


Custom HTML tags are valid HTML. Note the requirements for custom tags to always include a '-' in their names. This distinguishes custom tags from native ones. https://html.spec.whatwg.org/#valid-custom-element-name
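A simplified approximation of that naming rule (the real PotentialCustomElementName grammar in the spec also admits certain Unicode characters, so treat this as a sketch):

```javascript
// A handful of hyphenated names are already defined by SVG/MathML specs,
// so the HTML spec excludes them from valid custom element names.
const RESERVED = new Set([
  'annotation-xml', 'color-profile', 'font-face', 'font-face-src',
  'font-face-uri', 'font-face-format', 'font-face-name', 'missing-glyph',
]);

// Simplified check: lowercase ASCII start, at least one hyphen, no uppercase.
function isValidCustomElementName(name) {
  if (RESERVED.has(name)) return false;
  return /^[a-z][a-z0-9._-]*-[a-z0-9._-]*$/.test(name);
}

console.log(isValidCustomElementName('my-element'));  // true
console.log(isValidCustomElementName('myelement'));   // false (no hyphen)
console.log(isValidCustomElementName('font-face'));   // false (reserved)
```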


Will the browser be able to display them without requiring JS?

If no, then it can't really be argued to be valid HTML by itself.


> Will the browser be able to display them without requiring JS?

Without JS (which registers the element), custom element tags will be like any other unknown tag. It's like a span unless the CSS says otherwise.

The behavior will be missing, however.


They won't display what the JavaScript code tells them to display, but rather fall back to a <div />-like mode.


Yes, the article specifically states this.


The article says "without relying on a JS framework", which is different from not requiring JS. It does not work when JavaScript is disabled.


Well if the specs get implemented (which they're in the process of) this _is_ valid HTML. Any browser that does not implement these specs just reads them as unknown elements and does nothing.



