One of the primary selling points of React is the better performance of its virtual DOM, a claim that apparently has no support in benchmarks [1].

[1] https://aerotwist.com/blog/react-plus-performance-equals-wha...


That article has been discredited pretty thoroughly.

Anyway, it's not a panacea. It can't do much if you're forcing a reflow every few milliseconds.


As a complete outsider, do you have a link or two to support that so I could learn more?

It would seem to me, again completely without knowledge, that a virtual DOM is suboptimal. Web browsers should be able to optimize for that use case a lot better.


Pete Hunt's talk is good. https://youtu.be/x7cQ3mrcKaY?t=1112 I'd recommend watching the entire video if you have time, but that's the gist of it. More detail:


It is challenging to make a large app with lots of state performant when you manually mutate the DOM, and the browser can't fix that for you, because it has to honor each DOM read and write in the order your code issues them.

For example, if you move an element, and then request the position of another element immediately after, the browser will be forced to recalculate layout before giving you an answer. With lots of state and lots of updates, it's very challenging to keep this in the right order.
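
A minimal sketch of that layout thrashing (the class name and styles are mine): interleaving writes and reads forces a synchronous reflow on every iteration, while batching avoids it.

    // Interleaved write/read: every offsetHeight read forces a synchronous reflow
    document.querySelectorAll('.box').forEach((box) => {
      box.style.width = '100px';     // write: invalidates layout
      console.log(box.offsetHeight); // read: layout must be recalculated right now
    });

    // Batched version: at most one reflow, since all writes precede all reads
    const boxes = document.querySelectorAll('.box');
    boxes.forEach((box) => { box.style.width = '100px'; });
    boxes.forEach((box) => { console.log(box.offsetHeight); });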

Because the virtual DOM is an abstraction over the DOM, a small vanilla app can be made much faster than its virtual DOM equivalent, but as your app grows, that quickly reverses.


Shadow DOM will allow browsers to optimise their rendering code to re-render encapsulated DOM subtrees independently of each other.

Virtual DOM makes sense when (A) you have a huge tree and (B) you show only a small portion of that tree to the user at any given time.

For example, a long scrollable list or a multi-line text editor would be better implemented with a virtual DOM, but using it everywhere is overkill.
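
A vanilla sketch of the long-list case (all names here are mine): the full model stays in JavaScript, and only the currently visible rows are materialized in the DOM.

    // viewport: an overflow:auto element wrapping a position:relative spacer
    // whose height is items.length * rowHeight; call this on 'scroll' events.
    function renderVisibleRows(viewport, items, rowHeight) {
      const first = Math.floor(viewport.scrollTop / rowHeight);
      const count = Math.ceil(viewport.clientHeight / rowHeight) + 1;
      viewport.firstElementChild.innerHTML = items
        .slice(first, first + count)
        .map((item, i) =>
          `<div style="position:absolute; top:${(first + i) * rowHeight}px;` +
          ` height:${rowHeight}px">${item}</div>`)
        .join('');
    }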


If by discredited you mean that lots of React fanboys whined about it, then yes. If you mean that anyone showed actual evidence to the contrary, then no.


Call people fanboys all you want, but after the code was released, numerous glaring issues were found, and they still haven't been addressed.


The more accurate selling point is that you should be able to get "good enough" performance (usually meaning a consistent 60fps) with properly optimized React code.


There are at least two such libraries for C#: XWT [1] and Xamarin [2], which AFAIR is based on XWT.

[1] https://github.com/mono/xwt

[2] http://xamarin.com/platform


I wouldn't put the word "font" in the company name. In the near future SVG symbols may become the preferred way of shipping icons on the web.


Why is Vector.isZero() checking whether the vector length is smaller than 0.0001? Why this arbitrary number and not e.g. 0.00000001?


Dunno about this case, but probably because floating point arithmetic is a bitch. Try the following in Chrome's developer console:

    (0.3 - 0.1 - 0.1 - 0.1) === 0.0
    > false
    0.3 - 0.1 - 0.1 - 0.1
    > -2.7755575615628914e-17
In order to get helpful results, we're gonna have to pick some semi-arbitrary epsilon. Still, my problem domain might require a different epsilon than they expect; even if they have a default, the API should allow me to specify my own choice of epsilon.

They might also want to consider being more nuanced for the equals method than just taking the difference and comparing to zero. See http://floating-point-gui.de/errors/comparison/
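
For reference, here's a rough JavaScript port of the nearlyEqual approach from that guide (the constant and default epsilon are my choices, and epsilon is caller-supplied, as suggested above):

    function nearlyEqual(a, b, epsilon = 1e-9) {
      const MIN_NORMAL = 2.2250738585072014e-308; // smallest normal double
      const absA = Math.abs(a);
      const absB = Math.abs(b);
      const diff = Math.abs(a - b);

      if (a === b) {
        return true; // shortcut; also handles infinities
      } else if (a === 0 || b === 0 || diff < MIN_NORMAL) {
        // a or b is (close to) zero: relative error is meaningless here,
        // so fall back to an absolute comparison
        return diff < epsilon * MIN_NORMAL;
      } else {
        // use relative error
        return diff / Math.min(absA + absB, Number.MAX_VALUE) < epsilon;
      }
    }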


You're not the only one! The dev branch has this fixed so you can pass your own epsilon.


Hooray! Thanks for pointing it out :)


While we're here, the article you linked recommends equality comparison w.r.t. "the maximum number of possible floating-point values between the two values".

Any idea how to go about this in JavaScript? The binary representation of a float isn't as easy to come by (compared to C, for example), but I wonder if you couldn't get a decent approximation with Math.log2.


You can interact with binary representations of numbers in JavaScript using typed arrays – https://developer.mozilla.org/en-US/docs/Web/JavaScript/Type....

For example, you can get an array of the bytes in a number `n` with this:

    new Uint8Array((new Float64Array([n])).buffer)
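
Building on that trick, here's a sketch of the ULP-distance comparison the guide describes, counting the representable doubles between two values (the function name is mine; assumes BigInt64Array support):

    function ulpDistance(a, b) {
      const buffer = new ArrayBuffer(8);
      const f64 = new Float64Array(buffer);
      const i64 = new BigInt64Array(buffer);

      // Map the bit pattern to an integer whose ordering matches float ordering
      const ordered = (x) => {
        f64[0] = x;
        const bits = i64[0];
        return bits >= 0n ? bits : -(2n ** 63n) - bits;
      };

      const d = ordered(a) - ordered(b);
      return d >= 0n ? d : -d;
    }

    ulpDistance(1, 1 + Number.EPSILON); // 1n - adjacent doubles
    ulpDistance(0, -0);                 // 0n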


None of the frameworks you have mentioned support proper sandboxing of component internals. Their abstractions are leaky [1] and incompatible with each other. We need at least basic native shadow DOM support in order to hide implementation details and avoid naming clashes.

[1] https://en.wikipedia.org/wiki/Leaky_abstraction


Agreed, most libraries and frameworks have nasty leaky abstractions; that is part of the cost of adding those packages to your architecture. And heck, the browser itself is a clown car full of leaky abstractions; we deal with this problem everywhere.

I think successful open source projects have a pretty good track record of managing leaky abstractions, since they have so many users. jQuery did a good job in this area over time, papering over many of the leaky abstractions present in the DOM across browsers. Looking at React's bug tracker, I think they are doing okay too after a rocky start. I haven't really tracked Angular, but I hear Ember is a pretty well-run project too.

Shadow DOM does sound pretty good, but it's not essential to componentization if you're managing your IDs carefully. Honestly, I don't care whether a framework or the browser manages my component scopes, I just want it taken care of.

I always thought scoped stylesheets would be more important to non-leaky UI component development, but it seems like that is yet another dead-end experiment that didn't catch on. http://caniuse.com/#feat=style-scoped So we fall back to tooling again, and get something like SASS to manage and build our monolithic stylesheets.

I prefer linking to Joel Spolsky's blog when describing Leaky Abstractions since he coined the term: http://www.joelonsoftware.com/articles/LeakyAbstractions.htm...


ID prefixing won't prevent component internals from leaking to event listeners, querySelector results, TreeWalkers/NodeIterators, innerHTML, and many other APIs.

Native shadow DOM allows me to write components with strict encapsulation on par with built-ins such as <button> or <input>; no framework comes even close to that.

Styles defined inside shadow DOM are always scoped to the local shadow tree, so there is really no need for style[scoped]. Chrome and Opera (and maybe Firefox) support CSS encapsulation inside shadow DOM, making CSS preprocessors less necessary.
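
A minimal sketch of that encapsulation, using the standard attachShadow / customElements APIs (element and class names are mine):

    class XSlider extends HTMLElement {
      constructor() {
        super();
        const shadow = this.attachShadow({ mode: 'closed' });
        shadow.innerHTML = `
          <style>
            /* Scoped automatically: this rule cannot leak out of the shadow tree */
            div.thumb { background: teal; }
          </style>
          <div class="thumb"></div>
        `;
      }
    }
    customElements.define('x-slider', XSlider);

    // Internals stay hidden, just like in a built-in <input type="range">:
    document.querySelector('x-slider div.thumb'); // null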


Shadow DOM does sound pretty cool, I wish its design allowed it to fit in better with existing DOM.

I think React components give you pretty good isolation. You don't spend any time querying by selector or setting innerHTML; you just update your model, it re-renders the component tree, and then a diff of the virtual DOM yields the mutations it will perform on your behalf. Event binding is similarly abstracted. It's a different paradigm from what you're describing, and I think it works well for a lot more use cases than just "componentize the DOM".

I'd love to be able to combine shadow DOM and react somehow (assuming wide browser support emerges). I bet a lot of performance optimizations could be derived in the browser from native Shadow DOM.
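
To illustrate that paradigm, here's a sketch in plain React without JSX (the Counter component and the 'root' element are mine):

    function Counter(props) {
      return React.createElement(
        'button',
        { onClick: props.onIncrement },          // event binding is abstracted too
        'Clicked ' + props.count + ' times'
      );
    }

    let count = 0;
    function rerender() {
      // Re-render the whole tree on every model change; React diffs the
      // virtual DOM and applies only the minimal real-DOM mutations.
      ReactDOM.render(
        React.createElement(Counter, {
          count: count,
          onIncrement: function () { count += 1; rerender(); }
        }),
        document.getElementById('root')
      );
    }
    rerender();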


Evil and good are not real things; they are abstract, subjective, high-level concepts that exist only in our brains. You can easily reprogram a human ("teach" or "brainwash", in subjective terms) to classify death, slavery and misery as either good or evil.

All that matters in nature is whether something works or does not work, not whether it's good or evil.


Why does window.Symbol look like a constructor but work like a factory? Wouldn't it make more sense to have either a regular constructor (let symbol = new Symbol()) or a regular factory (let symbol = createSymbol())? If it looks like a duck, it should also walk like one.

When subclassing, what is the advantage of a [Symbol.toStringTag] getter over overriding the toString() method? Is it just another way to do the same thing?
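
For illustration, both behaviors in the console:

    typeof Symbol;       // "function" - capitalized like a constructor...
    Symbol('id');        // ...but called like a factory: returns a unique symbol
    // new Symbol('id'); // TypeError: Symbol is not a constructor

    // [Symbol.toStringTag] only customizes the tag that
    // Object.prototype.toString reports; overriding toString() replaces
    // the entire string representation.
    class Collection {
      get [Symbol.toStringTag]() { return 'Collection'; }
    }
    Object.prototype.toString.call(new Collection()); // "[object Collection]"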


The Symbol workaround can never clash with uses of `with`. Granted, almost everybody considers `with` a bad idea nowadays, but that's part of the rationale.
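
A sloppy-mode sketch of the clash in question (names are mine):

    var obj = { log: 'oops' };
    with (obj) {
      // `log` here resolves to obj.log ("oops"), shadowing any outer
      // function named log - a clash caused purely by the string key.
    }
    // A symbol-keyed property can never shadow an identifier this way,
    // because identifiers inside `with` are looked up by string name:
    var key = Symbol('log');
    obj[key] = 'safe'; // invisible to identifier lookup in the with block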


Are they worried about being killed soon by famine (after having eaten their own cats and dogs), war, or pandemic? If not, then they are not "struggling to live", which in turn means there is no need to reproduce that much.

From the biological perspective, excessive population growth can be dangerous to the species [1].

[1] http://en.wikipedia.org/wiki/Behavioral_sink


Probably not, but I can't see why "being killed by famine" is a relevant point here. (Many Germans alive today experienced the fear of being killed by famine in their past, that's for sure, as did many other people who were children during the wars and postwar years.)

Today 7 million Germans live on mini-jobs, and for most of them it is their only source of income. I don't really see those people reproducing like bunnies, but well... I could be wrong. It's a different kind of poverty, not the classic one.

> Excessive population growth can be dangerous to the species

I guess you don't want to encourage these people to reproduce? Again, that is not the point here. We are trying to understand why Germans have a low birth rate, not whether it is convenient for the rest of the planet.


Those who remember famine and war are usually past their reproductive age. Young people take it for granted that they will live into their eighties, even on their low-paying jobs.

If poverty caused reduced birth rates, we would have gone extinct a long time ago. I suspect, though, that increased competition for non-essential resources and territory (like a new car or a house) might somehow turn off our reproductive instincts, because in the past this was usually a prelude to self-destructive competition for essential goods.

I'm not encouraging or discouraging anything, I just don't see the problem. Everything seems to work according to nature. The only thing that needs adjusting is the social security system, which is based on the flawed assumption of infinite population growth.




The only purpose of the viewBox attribute is to define a rectangular area in the abstract coordinate system.

How that area will be fit into the viewport is determined by the "preserveAspectRatio" attribute.

If you don't define any viewBox then there will be no scaling - 1px in the viewport coordinate system will be mapped to 1 user unit in the abstract initial user coordinate system.
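
For illustration (the dimensions here are mine): this maps the 100x50 user-unit area defined by viewBox onto a 200x100 px viewport, uniformly scaling everything by 2.

    <svg width="200" height="100" viewBox="0 0 100 50"
         preserveAspectRatio="xMidYMid meet">
      <!-- Spans the full viewBox, so it is drawn at 200x100 px (2x scale) -->
      <rect x="0" y="0" width="100" height="50" fill="teal"/>
    </svg>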


Type Fu [1] uses this approach. You can choose between random words and random letters. The first level starts with letters or words that can be typed with the home-row keys, and each subsequent level adds more keys. The levels are optimised for Qwerty, Colemak and Dvorak layouts.

[1] http://type-fu.com/


Cool! Too bad I can't really figure that out without paying for it. Probably won't ever try it since I already type well.


