Performance is not great.
Lack of coherent data architecture (e.g. Redux).
Difficult to manage state changes (as opposed to React) due to a more limited programming model - e.g. template ifs and repeats don't know to re-evaluate in many cases (see the sketch after this list). This makes it harder to integrate with third-party JS libraries.
Inferior tooling vs React.
Philosophically Polymer is more about the DOM and React is more JS-focused. Some of this is personal preference.
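Regarding the re-evaluation point above, here's a minimal sketch of the failure mode in Polymer 1.x terms (the element and method names are invented for illustration):

    <dom-module id="todo-list">
      <template>
        <template is="dom-repeat" items="[[items]]">
          <div>[[item]]</div>
        </template>
      </template>
      <script>
        Polymer({
          is: 'todo-list',
          properties: {
            items: { type: Array, value: function () { return []; } }
          },
          addBroken: function (todo) {
            this.items.push(todo);   // plain mutation: dom-repeat never re-evaluates
          },
          addWorking: function (todo) {
            this.push('items', todo);  // Polymer's path API notifies the binding
          }
        });
      </script>
    </dom-module>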
What does that mean, exactly? Do you have any resources you could link to?
I found that once you have web components, there's nothing stopping you from representing your entire webapp's UI in terms of semantic elements only. For example, an HN comment thread could look something like this:
    <hn-thread title="Ask HN: What are the cons of Google Polymer?">
      <hn-comment author="sshumaker" posted="20160509T14:12:00">
        Disclaimer: I led development of gaming.youtube.com...
        <hn-comment author="justjco" posted="20160509T14:40:00">
          I found you don't really need Redux if you follow the "DOM is your API" paradigm....
        </hn-comment>
        <hn-comment author="torgoguys" posted="20160509T14:56:00">
          You answered the question asked, but I'm also curious as to what you liked about it for that project.
        </hn-comment>
      </hn-comment>
    </hn-thread>
When you do it this way, your DOM becomes basically isomorphic to what you would write in your Redux state or send in a JSON response. You could, in fact, write a trivial helper to convert between JSON and DOM, but there isn't really a need when you could just send HTML down the wire.
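To illustrate, such a helper could look roughly like this (a sketch only; the hn-comment shape comes from the hypothetical example above, not a real library):

    // DOM -> JSON: walk <hn-comment> children and build a plain object tree
    function domToJson(el) {
      var replies = [];
      for (var i = 0; i < el.children.length; i++) {
        if (el.children[i].tagName.toLowerCase() === 'hn-comment') {
          replies.push(domToJson(el.children[i]));
        }
      }
      return {
        author: el.getAttribute('author'),
        posted: el.getAttribute('posted'),
        text: el.firstChild ? el.firstChild.textContent.trim() : '',
        replies: replies
      };
    }

    // JSON -> DOM: the inverse, so HTML and JSON stay interchangeable
    function jsonToDom(comment) {
      var el = document.createElement('hn-comment');
      el.setAttribute('author', comment.author);
      el.setAttribute('posted', comment.posted);
      el.appendChild(document.createTextNode(comment.text));
      comment.replies.forEach(function (reply) {
        el.appendChild(jsonToDom(reply));
      });
      return el;
    }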
That said, I believe Redux is not so much about representing the state tree as it is about making its mutations centralized and predictable (the same actions, replayed, will always produce the same state). While a componentized tree often helps ensure this, it is less the case when the state shape itself doesn't neatly match the rendered component tree. This is the use case where Redux, in my opinion, offers some options.
If you're doing webcomponents right (i.e. adhering to the Law of Demeter, not handing out references to internal shadow/shady-DOM nodes, defining an API for possible child components), then you naturally get centralized & predictable mutations as well. The methods & child nodes of a top-level component become the API by which events can mutate the page, and then any state changes propagate down to the leaves of the component tree only via well-defined APIs.
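Concretely, something like this (all names hypothetical): the element's methods are the only write path, so every mutation funnels through one place, much like a Redux action would.

    Polymer({
      is: 'hn-thread',
      properties: {
        comments: { type: Array, value: function () { return []; } }
      },
      // the public API: callers hand us data, never references to our nodes
      addComment: function (author, text) {
        this.push('comments', { author: author, text: text });
      },
      commentCount: function () {
        return this.comments.length;  // read access without leaking internals
      }
    });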
> What does that mean, exactly? Do you have any resources you could link to?
Check out the "Thinking in Polymer" talk from Polymer Summit 2015: https://www.youtube.com/watch?v=ZDjiUmx51y8
Specifically talks about "the DOM is your API" at 0:49 --> https://youtu.be/ZDjiUmx51y8?t=50
disclaimer: technical writer for polymer
It was vastly preferable to server-side template rendering (how YouTube is traditionally built). :)
Generally well-built components with support for material design patterns.
Polymer team was responsive to issues and quick to address feedback.
Plus a bunch of non-technical reasons like developer mindshare within YouTube etc.
This wasn't true in Polymer 0.6, but that behavior required Object.observe, which is SLOW.
React's render-everything-and-diff approach avoids this issue.
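For contrast, a rough sketch of the React side (era-appropriate createClass style; names invented): there's nothing to observe, because every setState re-runs render and React diffs the output.

    var CommentList = React.createClass({
      getInitialState: function () {
        return { comments: [] };
      },
      add: function (text) {
        // no observation machinery: just declare the new state
        this.setState({ comments: this.state.comments.concat(text) });
      },
      render: function () {
        // re-run on every state change; React diffs old vs new trees
        return React.createElement('ul', null,
          this.state.comments.map(function (c, i) {
            return React.createElement('li', { key: i }, c);
          }));
      }
    });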
Polymer is made of magic. It uses magic cutting-edge browser features where available, and where they aren't, it fakes them using magic. Some of its APIs, like Polymer.dom(x), are completely magic. When it works, it works really well. When it doesn't, you are going to waste so much time trying to find out why.
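A taste of that magic (a sketch; the child div is just an example): under the Shady DOM polyfill, DOM manipulation is supposed to go through Polymer.dom(), and forgetting that is exactly the kind of thing you lose an afternoon to.

    // inside a Polymer 1.x element's method:
    var child = document.createElement('div');

    this.appendChild(child);               // native API: under Shady DOM this can
                                           // land outside the element's logical tree

    Polymer.dom(this).appendChild(child);  // the magic wrapper keeps the logical
                                           // and composed trees consistent
    Polymer.dom.flush();                   // Shady DOM batches work; flush applies it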
It's possible to extract the JS code from the vulcanized HTML file into a separate JS file using the crisper command, then run the uglifyjs command with appropriate minification arguments.
    vulcanize --inline-css --inline-scripts ./index.html | crisper -h ./build/index.html -j ./build/script.js
    uglifyjs -m -c warnings=false -o ./build/script.js --source-map=./build/script.js.map ./build/script.js
A few commenters mentioned poor browser support, but we haven't experienced any problems other than IE<=10.
In many ways Polymer is just a shim for the WebComponents spec with data binding added in, along with a standard library of web components. The resulting style is basically plain JS/HTML with a la carte use of Web Components where appropriate. Unlike [my perception of] React or Angular, you can use Polymer a bit without going all in. (Ironically, the one thing you can't currently do is mix Polymer elements from Source A with Polymer elements from Source B at runtime if they have common, but separately hosted, dependencies.)
WebComponents feel a bit like Java Swing, in that making HelloWorld is high overhead, but once you've got a nice toolbox of components going, you can pull them out and use them flexibly. This is not unlike React components, except Polymer/WebComponents use an HTML-centric definition format while React uses a JS-centric one.
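To make the HTML-centric point concrete, a minimal Polymer 1.x HelloWorld looks roughly like this (tag name invented):

    <dom-module id="hello-world">
      <template>
        <p>Hello, [[name]]!</p>
      </template>
      <script>
        Polymer({
          is: 'hello-world',
          properties: { name: { type: String, value: 'world' } }
        });
      </script>
    </dom-module>

    <!-- then anywhere in a page: -->
    <hello-world name="Polymer"></hello-world>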
For what it's worth, you can integrate React into your existing Backbone/Ember/Angular/etc app one component at a time. At my previous company, we did this over the course of a year while shipping new features, thanks to the interactivity React enabled.
Ryan Florence gave a talk about integrating React into an existing app: https://www.youtube.com/watch?v=BF58ZJ1ZQxY
That means that, without shimming them, Polymer is incompatible with any other libraries you might use to manipulate the DOM. For example, you can't easily mix Angular, React, or Ember templates with Polymer elements, because (e.g.) Angular's ng-if directive doesn't use Polymer.dom to inject the nodes it creates.
This is currently my biggest beef with Polymer. Someday, web components will be great, principally due to composability and portability: just plug the one component you need into your extant work. But for now, that promise is not quite realized.
Polymer can seamlessly switch between Shadow DOM and Shady DOM (though your own elements need to be tested under both). You can still use the Shadow DOM polyfill if compatibility is a higher concern than raw performance.
Safari and Chrome will have native Shadow DOM v1 implementations by the end of the year, and quite possibly Firefox too. If we have enough resources, we may decide to make an IE/Edge-specific Shadow DOM polyfill that'll be much faster because it'll patch DOM prototypes rather than wrap DOM nodes. Wrapping is currently necessary because Safari doesn't properly let you patch Node prototypes. If this happens, we'll have fast and compatible DOM APIs.
But I still am _super_ excited about the future of the webcomponents world. I mean, if nothing else, having well-namespaced element ids (if I have two editor components on the page, I can't just grab "#dateinput") will be great. I love the idea of directly attaching properties to DOM nodes, encapsulating templates, and projecting content into viewports. It's seriously rad. I am 100% on board with the notion; it's just the polyfill in the meantime that's problematic.
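A sketch of the id-namespacing point, using the v1 APIs discussed elsewhere in the thread (element name invented): both instances use #dateinput internally without colliding, and <slot> does the content projection.

    <script>
    customElements.define('date-editor', class extends HTMLElement {
      constructor() {
        super();
        var root = this.attachShadow({ mode: 'open' });
        // each shadow root is its own id namespace
        root.innerHTML =
          '<label><slot></slot></label>' +      // projected light-DOM content
          ' <input id="dateinput" type="date">';
      }
    });
    </script>

    <date-editor>Start</date-editor>
    <date-editor>End</date-editor>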
Also it is over-complicated for simple websites. You'll end up in bower-npm-grunt-etc-etc hell very quickly.
FWIW, that's because there hasn't been a reasonably stable, agreed-upon collection of specs for us to implement. Custom Elements only reached multi-vendor agreement a few months ago (https://annevankesteren.nl/2016/02/custom-elements-no-longer...).
Polymer is a useful library, but it only represents a single vendor's vision, not any sort of standardized specification.
One thing that's still frustrating is the lack of HTML Imports (<link rel="import">) in Firefox. Any word on whether that's ever going to change? HTTP/2 changes the performance constraints quite a bit, so the previous decision not to support native HTML Imports means you have to use vulcanize and can't dynamically mix and match sources.
Right now we have no intention of shipping HTML Imports; it's easy to polyfill, and we suspect that implementing ES2015 Modules and the module loader spec will change how we look at the problem of reusable components. To that end, support for a restricted subset of <script type="module"> should land in Firefox Nightly builds tomorrow (https://bugzil.la/1240072), so we're making significant progress on that front.
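For example (file and tag names hypothetical), a reusable component could ship as a plain module rather than an HTML Import:

    // hn-comment.js: registers the element as a side effect of being imported
    customElements.define('hn-comment', class extends HTMLElement {
      connectedCallback() {
        this.style.display = 'block';
      }
    });

    <!-- consumer page: no <link rel="import"> needed -->
    <script type="module">
      import './hn-comment.js';
    </script>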
It may be easy from an internal implementation point of view, but from a usage point of view I disagree, unfortunately. It disallows static HTML/CSS-only web components, which strongly goes against the grain of HTML's intent of providing markup-only documents. Granted, it's a minor use case these days, but it is important to the underlying philosophy of HTML, and it strongly limits the feasibility of building a static, HTML-only _and_ modular website.

Let's say a client's JS is disabled (via NoScript, which addons.mozilla.org reports as having 2 million users). In that case it'd completely break HTML-only web components for those 2-million-plus users, even components used primarily for CSS modularity. Unless I'm missing something with how ES2015 modules will work...?

This seems like a major lack of support for any notion of custom components, and particularly of user preferences on how they view the web, which is probably why this seems (to me) such an odd stance for the Firefox developers to take. I'll be following the ES2015 modules work to figure out if I'm missing something here!
Here's an example use case: tailoring static HTML documentation with use-case-based theming. Depending on the client's origin and/or browser type, it'd be trivial to redirect HTML Import requests to serve various themes/clients. While technically possible without native HTML Imports, it'd require either JS support or dynamic rewriting of HTML files (essentially run-time vulcanization on the server) rather than a simple HTTP redirect to a modular component.
> To that end, support for a restricted subset of <script type="module"> should land in Firefox Nightly
It'll be interesting to see how ES2015 Modules and the module loader spec ease the day-to-day usage of web components. Even with polyfills, Firefox seems difficult to coax into loading custom components -- vulcanizing the HTML is the only method I've found to be reliable. I'll test-drive <script type="module"> and file any bugs I find, thanks for the heads up! The script-module approach should support the x-tag library pretty well.
All of the browsers are on board to ship Web Components and have been actively engaged in the standards discussions.
Safari just shipped Shadow DOM v1 in Safari Technology Preview. https://webkit.org/blog/6017/introducing-safari-technology-p...
Edge is actively working on Shadow DOM and Custom Elements support https://blogs.windows.com/msedgedev/2015/07/15/microsoft-edg...
Firefox is working on Shadow DOM and Custom Element support as well.
The odds are good that other browsers will begin landing these features in their stable releases in 2016.
lacks older browser support
mixing it with virtual-DOM libraries like React is not easy or straightforward.
See here for the details: https://www.polymer-project.org/1.0/resources/compatibility....
Vulcanize is pretty cool, though we had a hard time getting it to work right in a gulp workflow.
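The shape of the pipeline is roughly this (a sketch, assuming the gulp-vulcanize and gulp-crisper plugins; your paths and options will differ):

    var gulp = require('gulp');
    var vulcanize = require('gulp-vulcanize');
    var crisper = require('gulp-crisper');

    gulp.task('build', function () {
      return gulp.src('index.html')
        .pipe(vulcanize({ inlineScripts: true, inlineCss: true }))
        .pipe(crisper())  // splits the inlined JS out into a separate file
        .pipe(gulp.dest('build'));
    });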
I've got a project in the pipeline for which I was planning to use Cycle.js, but I've always had a niggling interest in Polymer.
What I'd like to know is whether Polymer offers any significant pros, because after consuming some of the docs, what's on offer seems to be a better reimagining of ASP.NET WebForms. What are the benefits?