Relay Technical Preview (facebook.github.io)
474 points by cpojer on Aug 11, 2015 | 106 comments

I'm totally cool with Facebook mining my data if their open source keeps up this pace. GraphQL + Relay are total game changers for structuring web + mobile applications. Code bases get cleaner and more reliable. Less data gets sent over the wire. Other cool libraries are going to be built on top of Relay (I'm pretty excited to see what can be done now with ClojureScript components in .cljc files).

This is so awesome. Much love to everyone at Facebook who has made this possible. With React, React Native, Rebound, GraphQL, Relay, etc., you're saving us all from drowning in complexity when building web/mobile apps, and I love it. Keep fighting the good fight.

So I'm drinking the React kool aid, anyway, but this stuff has hardly been out in the wild. People have been saying tech X saves us from complexity for decades (OOP, noSQL, Knockout, Angular, etc...), and after a few years of use we realize it's not actually a magic bullet. Again, I'm drinking the kool aid, but maybe we should slow down on the hyperbole. Unless there are reasons to believe that things really are different this time?

Also, totally not okay with how FB mines data; they don't get a pass on that. Their OSS work is top notch, though. Now if only someone would take their own technology and beat them with it. A little guerrilla warfare.

At a more general level:

> People have been saying tech X saves us from complexity for decades (OOP, noSQL, Knockout, Angular, etc...), and after a few years of use we realize it's not actually a magic bullet.

Throughout all your years of education, each stage felt equally challenging. That doesn't mean you weren't doing and learning more.

X did save us from complexity. We just keep writing increasingly complex things.

Yesterday's sites were rendered on the server, with all state stored on the server. Today's are rendered on the client, with business state on the server and rendering state on the client. Tomorrow's will store all data distributed across clients; think BitTorrent + encryption + WebRTC. The Day After's sites will be rendered on the server again, because by then we'll be working with extremely "complex" 3D holographic apps.

This is spot on.

But, I disagree with your last statement:

> The Day After's sites will be rendered on the server again, because by then we'll be working with extremely "complex" 3D holographic apps.

Data is data is data. When the problem of distributing, syncing and querying data distributed between client and server is solved, it is solved forever for all classes of application which are built on that paradigm.

This is true whether you're rendering a 2D web page or a 3D holographic image from that data on the client. There would never be a need to go back to server management of data or state - that's why leaps forward in this space, which React is and now Relay looks like it will be, are incredibly exciting and pull the whole industry forward.

I would suggest a more likely 'Day After' problem would be managing more granular distribution of data and state through networks of clients, i.e. apps built on the peer-to-peer IoT paradigm rather than today's dominant client-server paradigm.

The distinction between client and server is already evaporating, with things like server-side DOM bringing client functionality to the server and service workers bringing server functionality to the client. I expect the next generation of distributed databases will include the client in their storage layout. However, that will only make it more apparent that the fundamental problem of synchronizing state between multiple machines is an unsolvable one, thanks to the CAP theorem. By unsolvable I mean that the intuitive expectation of the user is an always-available system that is completely consistent, and we know we can't give them that. I feel like we're heading into the data-processing equivalent of the AI winter, where the wet towel of reality will dampen much of the hyperbole around P2P and IoT.

Fair points!

I think we agree on everything. I was honestly just being coy with "Day After". I don't believe I can adequately predict tomorrow's complexity issues. I just picked something arbitrarily "complex", not really giving it much thought.


Probably: s/coy/silly/

I think you need to understand what the issues are with previous frameworks and why React is different.

React does save time. Maybe you've not built sufficiently complex apps in which state has bitten you in the ass, which isn't a bad thing. From experience, before React it was a nightmare as soon as the first dependency issues arose (whether that's data loading, UI flows or component hierarchies).

React's fundamentals are in managing state via composition and functional-style declarative programming; it's hard to argue that any other framework takes such a clean, pure approach or has really solved the problem.

With Flux, Relay/GraphQL and ES7's decorators it's easier than ever to build pure, stateless components that are drop-in replacements for Angular's controller hell, or Marionette's awkward controllers.
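A rough sketch of what "pure, stateless component" means here (illustrative shapes only, not a real React API): a component is just a function from props to a description of UI, with no internal state, so it's trivially testable without a DOM.

```javascript
// Minimal element helper: a plain object describing what to render.
// Purely illustrative -- real React elements have more to them.
const el = (type, props, ...children) => ({ type, props, children });

// A stateless component: same props in, same element out, every time.
const LikeButton = ({ count }) =>
  el('button', { className: 'like' }, `Like (${count})`);

// Because it's a pure function, rendering is just calling it:
const rendered = LikeButton({ count: 3 });
// rendered.type === 'button', rendered.children[0] === 'Like (3)'
```

No controllers, no two-way bindings; replacing the component means swapping one function.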

OT: At this point, I am so confused about what "drinking the Kool-Aid" even means. People on the Internet seem to use it for every possible meaning. I was surprised to see that it has its own Wikipedia page: https://en.wikipedia.org/wiki/Drinking_the_Kool-Aid.

And it seems like you mean exactly the opposite of what you've said, based on that definition: you're either not drinking the React Kool-Aid, or you're drinking the anti-React Kool-Aid.

I'm not trying to impose grammar OCD on anyone by any means. Like I implied, I'm probably one of the more ignorant people about these phrases. I just think it's confusing when people say things like "could care less" and "drinking Kool-Aid" when they actually mean the complete opposite. It's very bizarre to me but OTOH I guess it's fascinating that English is just that flexible.

I think his suggestion is that he is drinking the Kool-Aid while internally questioning whether it is a good idea for him to be drinking the Kool-Aid. Which, I agree, is sort of a contradiction, but a parseable one.

This makes more sense. He's basically saying, "I love this, but let's not get too carried away." I now see where that comment's coming from!

Thanks. Nit-picking again, but drinking the Kool-Aid does mean "unquestioning, ... without criticism" implying that those who are drinking wouldn't have such reservations. That's what I was confused about. Oh well. Language is evolving, what else is new.

It might not have been the original meaning, but when you think of a cult member drinking the Kool-Aid (that will kill them), they might be a true believer with no doubt in their mind, but they could also be someone who isn't sure and is being led by peer pressure and so on.

When you accuse yourself of drinking the Kool-Aid (or discuss yourself drinking it), it carries a degree of self-recognition: "sure, maybe I'm drinking the Kool-Aid, but at least I realize it".

Every top-level comment so far has started by saying that this is exciting and awesome. The rest of the comments are full of accolades. That is drinking Kool-Aid. And it's okay to do that. Now let's get back to our hacker roots and start tinkering to see if this carries any weight. Everything is a trade-off in technology, so what are we gaining by using this? And what's the cost? Is there a clear benefit?

Thanks. So another definition of drinking Kool-Aid would be "praising highly" I suppose.

To "drink the Kool-Aid" is to blindly place trust in or unquestioningly accept something (that could in all reality be a Solo cup full of poison).

So it's not so much that they're praising it highly in and of itself, as that they're (implicitly) praising it highly because they're buying the hype without questioning it; it's not like they've used it in anger yet or heard third-party testimonials.

I mean, this isn't like a kickstarter project where you have a nice video from some random person and nothing but blind optimism it can actually deliver. This is Facebook open-sourcing a library that has been in their production codebase for multiple years. It solves a specific problem that most developers are intimately familiar with, and has already been praised by people in the tech community who aren't officially associated with Facebook. It's ok to get excited about this :)

Yeah, I mean that's what I was saying I thought the definition was in my original comment. But taken the way the OP apparently meant it, it does mean something similar to "praise highly" or "very excited about." The meaning's evolved.

Still OT, but it is pretty crazy how language develops over time and usage. Different people seeing a word, observing its context, and reusing it in similar context (or a context that person perceives to be similar), and thus potentially changing the meaning of the word ever so slightly.

That said, he did use the phrase correctly, well, sort of. He was saying he believes in React, and then noted some caveats with people's reactions to React. Which isn't properly "drinking the Kool-Aid", but his meaning gets across.

He meant exactly what he said.

Haven't you heard? There are tons of new JS frameworks being churned out annually, and each time one is hailed as the holy grail to all our problems.

Can you expand on what you mean by "what can be done with ClojureScript components in .cljc files"?

Is Facebook also working with ClojureScript?

Well, the whole idea of Relay is to couple the rendering logic of a UI component with the data-fetching requirements of that component. So if you change the rendering logic of the component, the data fetched from the server changes with it, and you'll never be over- or under-fetching data.

But inside the UI component you're really just defining the schema of the data to be returned from the server; you actually implement how to fetch that data from the database somewhere else in your codebase.

Clojure 1.7 introduces a new file type, with the extension .cljc, that can be loaded by both Clojure and ClojureScript. Which means you should be able to define both your React component AND the server-side implementation of the data-fetching schema in one file. Pretty frickin' cool if you ask me. One file, one component, that knows how to render itself, fetch its data, and serialize its data.
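To make the colocation idea concrete, here's a toy sketch in plain JS (illustrative shapes, not the real Relay API): each component carries its data requirements next to its render logic, and a parent's query is composed from its children's fragments, so the server is asked for exactly what the views will render.

```javascript
// Each "component" pairs a fragment (its data needs) with a render fn.
// All names here are hypothetical -- this just shows the composition idea.
const Avatar = {
  fragment: ['profilePicture { uri }'],
  render: (user) => `<img src="${user.profilePicture.uri}">`,
};

const UserCard = {
  // The card's requirements include whatever Avatar needs.
  fragment: ['name', ...Avatar.fragment],
  render: (user) => `<div>${user.name}${Avatar.render(user)}</div>`,
};

// Composing the query from colocated fragments: change what UserCard
// renders and the query changes with it -- no over-/under-fetching.
const query = `user(id: "123") { ${UserCard.fragment.join(', ')} }`;
```

Delete the Avatar from UserCard's render and you'd also drop its fragment, so the `profilePicture` field simply stops being fetched.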

The ClojureScript community is pretty close to React (see e.g. Om), and since React 0.13 supports plain classes, components can be written in ClojureScript. So FB itself might not be doing anything with ClojureScript, but it enables the ClojureScript community to work in the same ecosystem.

I'm really excited about this! While working on an "isomorphic" app, I found that data fetching gets incredibly complicated. There are many edge cases. For example, when rendering on the server, you have to block all renders until all data fetching is complete. But on the client, you can show the view with a "loading" indicator, i.e. not block. But you only need to fetch data for a route on the client if it hasn't already been fetched on the server... the rabbit hole is full of wheels you don't want to reinvent.

I'm hoping Relay solves the data fetch problem in a way that makes isomorphic applications much cleaner.
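The "only fetch on the client if the server didn't already" case above can be sketched as a tiny guard (all names hypothetical; `preloaded` stands in for data the server embedded in the page, e.g. on a global):

```javascript
// Decide whether to hydrate from server-embedded data or fetch anew.
// Hypothetical sketch of the edge case described above.
function resolveRouteData(route, preloaded, fetch) {
  if (preloaded && preloaded.route === route) {
    // Server already rendered with this data: hydrate, don't block.
    return { loading: false, data: preloaded.data };
  }
  // Client-side navigation: show a loading state, fetch in background.
  fetch(route);
  return { loading: true, data: null };
}

const hydrated = resolveRouteData(
  '/feed',
  { route: '/feed', data: ['post'] },
  () => { throw new Error('should not refetch'); }
);
const clientNav = resolveRouteData('/about', null, () => {});
```

Multiply this by caching, errors, and partial data, and you can see why having a framework own it is attractive.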

Meteor already does this. In your template (Blaze) you can wait for your subscriptions to be ready and in the meantime render a loading icon or something else. Check it out!

In this case, I think the more accurate example would be Meteor + React [1]. You can still show the "wait for data to be ready" state as you described. (For those interested, follow the linked tutorial.)

The Kadira team have even gotten server-side rendering working with React [2][3], while no one has yet for Blaze, because Blaze templates aren't currently available on the server due to how the bundler works. I'm sure MDG will make them available soon, and then we'll be able to have SSR with both React and Blaze.

1. http://react-in-meteor.readthedocs.org/en/latest/

2. https://github.com/kadirahq/flow-router/tree/ssr

3. https://kadira.io/blog/meteor/meteor-ssr-support-using-flow-...

> you have to block all renders until all data fetching is complete.

Not if you use a server-side templating engine that supports async rendering, like Marko [0][1].

[0] https://marko-progressive-rendering.herokuapp.com/

[1] https://github.com/marko-js/marko

I'm spoiled by the niceness of JSX. It's very clean, expressive, and easy to read, because it's only what I already know: JavaScript and HTML. There are no APIs to learn (except for some minor cases, like inline comments). This template language looks like it would only have downsides compared to JSX. It's a new syntax to learn, and less expressive. This templating language even adds HTML-like attributes! What a mess.

Also I don't think the templating language has anything to do with the nature of blocking vs non-blocking data fetching.

> Also I don't think the templating language has anything to do with the nature of blocking vs non-blocking data fetching.

The previous commenter was specifically talking about the issue where the client side of an isomorphic application can "show the view with a 'loading' indicator", but generally the server is not able to stream out the full HTML until all data has been gathered and processed. The templating engine that I mentioned has a solution to that specific issue.

For example, in the first link, you can see that even though the template is rendered on the server, and has a data source with a 3 second delay, it does not prevent the rest of the template from being rendered and sent to the client, and includes a "loader"-like message (you could easily plop a loading gif in there instead of the textual message in the example).

So it does seem relevant to me.

To clarify, the loading case I mentioned is related to single page apps. The server isn't the one rendering the page when the loading indicator pops up, that's entirely a client side render ("isomorphic" being the ability to fully render on client or server side with the same codebase). The server doesn't need to stream a page with a loading indicator in the HTML. Being able to stream part of the app while data is loading (like the header) does seem like an interesting case, but it would only matter for the initial page load in an isomorphic SPA.

> This templating language even adds HTML-like attributes! What a mess.

        <for each="item in items">
I have no problems with a mess that looks like that. Anyone on my team could pick that up with no problems whatsoever.

> it's only what I already know, Javascript and HTML

But it's not. "className", much?

Oh boy, "className" and "labelFor". See, wasn't that hard to learn JSX if you already know HTML.

I think you meant htmlFor, no? But I completely agree with the sentiment that it is nice to stay as close to plain HTML and JS as possible.

Yeah, thanks. I don't actually remember if I've ever used it :D

I'm not saying it's hard. But {?variable} isn't hard either.

w3schools might have gotten a bit better lately, but please use MDN as a reference:


They offer so much more context.

Sure. But that's different than <div className="blah">.

Not much. These are actually the Web API property names you use when accessing these attributes from a non-HTML language (e.g. JavaScript).

The React folks didn't make them up.
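The reason for the renames is that `class` and `for` are reserved words in JavaScript, so the DOM exposes those attributes under different property names, and JSX simply follows the DOM. A tiny sketch of the mapping (the map itself is my own illustration, but `className` and `htmlFor` are the real DOM property names):

```javascript
// HTML attribute -> DOM property, for the reserved-word cases.
// In a browser, el.className reads/writes the `class` attribute and
// label.htmlFor reads/writes `for`; most other names pass through.
const attributeToDomProperty = {
  class: 'className',
  for: 'htmlFor',
};

const jsxName = (attr) => attributeToDomProperty[attr] || attr;
```

So `<div className="blah">` in JSX is just `div.className = "blah"` in disguise.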

This is very exciting. Facebook's commitment to open source never ceases to impress me. They could keep this technology to themselves and stay light years ahead, or we'd only read about it in academic papers, like Google has done with its core technologies, and someone else would have to reverse engineer them. But Facebook gives the entire code base. No other large company I know of has such a strong commitment to open source.

> we'd only read it in academic papers, like Google has done with its core technologies

Nit: Google has been pretty generous in supporting developers outside the company. E.g., see Golang, Kubernetes, the JS Closure Library, protobuf, Bazel, and many more at github.com/google.

Not to mention being a top contributor to the Linux kernel, Clang/LLVM, MySQL, doing almost everyone's security research for them (including, notoriously, finding most of the issues that get released in Apple security patches), etc. Facebook sure does a better job of evangelizing their own open source stuff, but I'm not sure that means they're actually contributing more. Even some of Facebook's "contributions" are ripped off from Google, like Buck.

Buck was not "ripped off" from Google any more than Hadoop was "ripped off" from Google.

Buck is a nearly verbatim clone of the build system used inside Google, except that the word "blaze" has been replaced by the word "buck", and it was written by xooglers at facebook.

Netflix has a pretty awesome set of open source libraries; they are worth checking out as well. https://netflix.github.io/

It's funny you should mention Netflix. They demoed something called Falcor two years ago that is/was extremely similar to Relay/GraphQL. It's unfortunate that it's made so little progress in the past two years, but it wouldn't surprise me if the talks on Falcor had influenced or inspired Facebook's implementation.

For those that want links:




I remember from a David Nolen talk that Relay/GraphQL and Falcor were actually developed in complete isolation from each other, believe it or not.

The fact that these two companies came up with the same idea independently just reinforces how truly beneficial it must be.

I can't wait to try Om Next's implementation of this concept in my apps!

FWIW, Jafar announced last week that it would be open-sourced in the next couple of weeks [1], and a number of people have access already and are helping with documentation and the like.

Falcor and Relay are conceptually similar but Relay is much more of a query engine, and Falcor isn't so much.

[1] https://plus.google.com/events/ca3l6qalpu0uqcce58a379006m0

> two years ago

AFAIK Falcor was announced just six months ago in spring 2015; I assume it is making progress towards a release. Relay wasn't based on it.

You're mistaken. My first exposure to it was a talk at QCon in Nov 2013. It wasn't posted publicly online until Feb 2014, but here's the link: http://www.infoq.com/presentations/netflix-reactive-rest

Edit: I meant to write "...stay light years ahead."

The release commit is really the best: https://www.dropbox.com/s/9gx377scddhxo95/Screenshot%202015-...

All I can say now is: ༼ つ ◕◡◕ ༽つ Got RELAY

Just putting a live link to that commit here: https://github.com/facebook/relay/tree/2a86be3e71cdc6511fa99...

Have you seen the actual code of the mutations?


It is ... massive!

The API for mutations is one of the things we were least happy with, but we thought it was better to ship and iterate rather than get stuck on wanting to perfect everything out of the gate. Hopefully we can remove most of the boilerplate over time.

That said, mutations in Relay are very powerful. This one does an optimistic update and handles inserting both the optimistic update and the server response into the graph at the correct place. The fat query means that we can use the same mutation everywhere, even across different apps; Relay only refetches the fields from the fat query that are used by the current view.

Yeah, as someone who knows a fair amount of JS but almost nothing about React, that todomvc implementation does not particularly scream "this is the framework you want to learn".

Of course... but I think the guys at Facebook are right in that getting something out is better than just talking about it alone. Also, the community will probably come up with something that's a bit more lean/understandable in the end.

Look at the progression from the Flux talks, to a desire for isomorphic JS (with Flux), to Fluxible from Yahoo, Flummox, Fluxxor, Alt... and finally we get Redux, which is actually quite lean and a fairly clean abstraction with almost no boilerplate. Though arguably you can do a lot of it with RxJS alone.

It wouldn't surprise me if within two years we see something similar happen with Relay and GraphQL. For that matter, it wouldn't surprise me to see implementations that easily target specific database backends. I.e., you establish your object schemas, associate them with your DB collections/tables, and establish filters based on client permissions. API/service toolkits for data access built against the protocol in question, not necessarily a specific implementation.

One thing I think is a poor design choice on the part of the implementer of this example:

Note: Not a design choice of Relay

It's that each mutation file is redundant: each mutation file corresponds one-to-one with a component file. I think it does a big disservice to React itself (specifically JSX), which advocates colocating view and template for cohesion.

Maybe it's subjective... and I can surely see the reasons for separating them, but I hope yungsters is reading this and colocates the mutations.

Mutations are global, not view specific. They can be triggered by multiple views, and are smart enough to only refetch the data that is currently being shown.

For Facebook we can write a single "like" mutation and share it between comments, posts, photos, etc.
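The fat-query behavior described above can be sketched in a few lines (illustrative only, not Relay's real internals): the mutation lists every field that *could* change, and the client intersects that with the fields the current views actually use, refetching only the intersection.

```javascript
// A "like" mutation declares everything that might change (the fat query).
// Hypothetical field names for illustration.
const likeMutationFatQuery = ['likeCount', 'doesViewerLike', 'likers'];

// Only refetch fields that some currently-rendered view is using.
function fieldsToRefetch(fatQuery, fieldsInUse) {
  return fatQuery.filter((field) => fieldsInUse.includes(field));
}

// The same mutation is shared by very different views:
const commentView = ['likeCount'];
const photoView = ['likeCount', 'doesViewerLike'];

const refetchForComment = fieldsToRefetch(likeMutationFatQuery, commentView);
const refetchForPhoto = fieldsToRefetch(likeMutationFatQuery, photoView);
```

One mutation definition, minimal refetching per view; that's the trade the boilerplate buys.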

Yeah... That's not something I would enjoy working with. It's a nice idea, I'll add my hopes that some of the boilerplate will get removed, but I can't see myself working with this on a production app any time soon.

Can anyone explain what this is all about? I followed the tutorial, but still cannot understand the ideas behind Relay/GraphQL. There must be some principles behind why the application is structured that way, but without seeing them, it looks like layers of indirection for the sake of complication.

When React came out, the core ideas were crystalline, and I was able to see the advantages in 5 minutes and to actually start doing something in 15. I would be happy to share the excitement for Relay... anyone care to explain? :-)

Nothing explains the motivations for Relay/GraphQL better than their introduction of it earlier this year, I'd recommend watching the video: http://facebook.github.io/react/blog/2015/02/20/introducing-...

Is there anything in text format? I really cannot follow the whole video right now. I tried to follow the slides, but they do not make much sense alone.

Here are some examples of things I do not understand.

The article below the video states that "By co-locating the queries with the view code, the developer can reason about what a component is doing by looking at it in isolation", but nowhere in the tutorial's treasure hunt application do I see this in action. There is a single React component called App, and I fail to see where it declares its data dependencies.

Moreover, according to the article, "Each component specifies its own data dependencies declaratively", but it seems that a core concept is that of mutation, which does not sound very declarative (although I am not sure I understand what a mutation is exactly). The code for the mutation seems to access this.props (which I assume lives on the component). This looks like a cyclic dependency between components and mutations. The mutation also accesses a query over something called CheckHidingSpotForTreasurePayload, which seems to be defined only in schema.json, although, if I understand correctly, that JSON is auto-generated.

All of this leaves me confused, which is why I was asking for a more conceptual explanation. In other words: given the aims expressed in the article you linked (declarative data dependencies and so on), what is the reasoning that led to the actual design of Relay?

I wish facebook had not used "GraphQL" as their name for their SOAP/REST/RPC replacement. When I hear GraphQL I think of a query language for graphs. Like in (for instance) http://dl.acm.org/citation.cfm?id=1368898 . There has been a lot of cool research over the years on query languages for graphs. Facebook's "GraphQL" is totally nerfing Google's ability to find it.

Any idea why they decided to use string-based queries for GraphQL?

I feel that something that can be composed programmatically without having to deal with string concatenation, like Falcor's queries or the Datomic Pull syntax proposed in Om Next [1], could be more flexible and robust. I may be missing something.

[1] https://www.youtube.com/watch?v=ByNs9TG30E8

You can always use JS/ES6 template literals (with Babel or Traceur).

The major advantage of Relay/GraphQL seems to come when you have one monolithic data model for your entire codebase. You are, in effect, binding your views directly to your backend. This is great if you are a company like Facebook with a single graph holding all data.

Sadly working as a consultant, using Relay as prescribed offers little use for me as I port from client to client with widely different data models. I am interested in maybe using Relay in parent React components to keep logical separation between my models and views.

This is actually incorrect. The Relay/GraphQL folks explicitly call out directly exposing your persistence structure via GraphQL as detrimental. Instead, you simply describe what your business models are with GraphQL (regardless of how they're stored), just as you would with a REST API. GraphQL acts as an abstraction layer on top of your persistence. The key difference with GraphQL vs a REST API is that you don't have to commit to specific endpoints that return specific models; the clients can simply pick and choose (within the confines of what your GraphQL schema allows).
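A bare-bones sketch of that abstraction-layer point, with no GraphQL tooling at all (the "tables", field names, and resolver shape are all hypothetical): the exposed business model doesn't have to match the storage shape, because resolvers translate between them.

```javascript
// Fake persistence layer: two "tables" with a normalized shape.
const usersTable = { 1: { first: 'Ada', last: 'Lovelace' } };
const postsTable = [{ authorId: 1, title: 'Notes' }];

// The exposed model offers `fullName` and `posts`, neither of which
// exists as a stored column -- clients never see the persistence shape.
const resolvers = {
  user: (id) => {
    const row = usersTable[id];
    return {
      fullName: `${row.first} ${row.last}`,
      posts: postsTable
        .filter((p) => p.authorId === id)
        .map((p) => p.title),
    };
  },
};

const result = resolvers.user(1);
```

Swap the tables for a document store tomorrow and the exposed model, and every client built against it, stays the same.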

I've skimmed the Relay and GraphQL repos, but I can't for the life of me figure out which database backends are supported. Can I put this in front of Postgres? Redis? How do I stand this up in front of an existing DB?

From what I understand, you build that bit yourself.


As I understand it, this is still 100% client-side. GraphQL is basically a new schema language for defining a web service contract. You still need a complete server-side implementation to satisfy the schema. And unless you're doing it in node, there is probably no tooling in your language.

GraphQL is very new at this point, so there aren't many backends/databases that support it. If it catches on I imagine that will change very rapidly.

How does this compare with Flux? Is it intended to be used with Flux or instead of Flux?

It should be used instead of Flux.

The one caveat is that Relay's store is a representation of the data you've defined in GraphQL. So you can't use it to store data that shouldn't exist on the server. That's something we'd like to fix soon.

Is it not advisable to use Relay alongside Flux? I would think they could complement each other nicely.

You can, and we do exactly that for some apps at Facebook. Ideally Relay could completely replace Flux. This is something we hope to build soon.

Can someone explain to me how they are using an all-JS stack (Node, including server-side rendering) at a company known for using PHP on the backend?

Do they have a specific PHP-to-Node bridge on the server side? If they write isomorphic code, either they are writing apps completely separate from PHP or they have some kind of integration (Node-in-PHP?) running?

I would be grateful for hints, I'm looking into working more with FB tech but I can't do Node on the server right now. Knowing how their architecture looks like with PHP/Hack on the backend would really help.

We are experimenting with it, and I've built a service for JS at FB. I'm convinced that server rendering should be transparent to a product developer, i.e., no special work should be required, and the server rendering infra should decide automatically whether it will render on the server or not.

Relay has a special server rendering mode that we created that I'm hoping to speak about soon. Until then I'm afraid I can't say more than that :)

Their GraphQL server is written in PHP. They've made the reference implementation (graphql-js) in JS + the server in Express but they're not using that in production.

For server side rendering of the React app, they use https://github.com/reactjs/react-php-v8js (well they probably use a slightly customized version but this is what they open sourced).

We are not using php-v8js for server rendering at FB.

I stand corrected. :)

Relay and GraphQL are backend agnostic which means you can write a GraphQL server in any language and then use it in connection with Relay and ReactJS. Even a Ruby implementation is already there: https://github.com/rmosolgo/graphql-ruby

Exciting. So does this do away with implementations of Flux (like the excellent Redux), or is there room for them to work in concert?

The idea of Relay is cool. And GraphQL is indeed a nice thing for mobile engineers and product developers. I think it's a novel way to query data.

Note: I'm mainly covering GraphQL.

What I'm missing is implementations. For GraphQL you want a Java/Python implementation ready that can be hooked into your storage engine.

For iOS / Android you need some code generation tools that can generate your clientside business objects from the graphql schemas.

When I think about it, GraphQL combines the best of the SOAP/XML era (schemas, type safety, client generation) with the new REST/JSON world (low footprint, simple structures).

However, it is still very difficult to adopt. Most of the time, in a startup environment, you are faster implementing a REST API and building your app on top of that. A schema (something like Swagger or JSON Schema) might help with client-side code generation.
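The client-side code generation idea mentioned above can be sketched in a few lines (purely illustrative, no real GraphQL or Swagger tooling; the schema shape and names are invented): given a schema description, generate a "business object" factory that type-checks its input.

```javascript
// A toy schema: field name -> expected typeof. Hypothetical shape.
const todoSchema = { id: 'number', text: 'string', complete: 'boolean' };

// "Code generation" in miniature: turn a schema into a checked factory.
function makeFactory(schema) {
  return (data) => {
    for (const [field, type] of Object.entries(schema)) {
      if (typeof data[field] !== type) {
        throw new TypeError(`${field}: expected ${type}`);
      }
    }
    // Freeze so the generated object behaves like a value.
    return Object.freeze({ ...data });
  };
}

const Todo = makeFactory(todoSchema);
const todo = Todo({ id: 1, text: 'Ship it', complete: false });
```

Real tools for iOS/Android would emit typed classes from the schema instead, but the payoff is the same: the client can't silently drift from the server's contract.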

This is the best commit message:

༼ つ ◕_◕ ༽つ Give RELAY


Wow, looks like what we've been doing for the last 4 years is very similar to the design of Facebook's tools they've been open sourcing. That is some serious validation for our architecture!

(For anyone who's interested here was our design: http://platform.qbix.com/guide/tools, http://platform.qbix.com/guide/messages)

It doesn't look like Relay at all; could you clarify which design is the same?

This stuff:

> Relay coalesces queries into batches for efficiency, manages error-prone asynchronous logic, caches data for performance, and automatically updates views as data changes.

> Relay is also component-oriented, extending the notion of a React component to include a description of what data is necessary to render it. This colocation allows developers to reason locally about their application and eliminates bugs such as under- or over-fetching data.

We do the same thing (we call it the getter and batcher pattern). See http://platform.qbix.com/guide/patterns

Briefly, the way it works is that you request a certain object, and don't have to worry about batching, throttling, caching, etc. You can also say "please call this function when all the following objects are fetched".
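A minimal sketch of a getter/batcher like the one described (hypothetical shape; real versions flush on a timer or microtask rather than an explicit flush()): callers request single ids, and everything requested before the flush becomes one backend call.

```javascript
// Callers ask for individual ids; the batcher coalesces them into a
// single fetchMany() call on flush. Illustrative, not Qbix's actual code.
function makeBatcher(fetchMany) {
  const pending = new Map(); // id -> array of callbacks
  return {
    get(id, cb) {
      if (!pending.has(id)) pending.set(id, []);
      pending.get(id).push(cb);
    },
    flush() {
      const ids = [...pending.keys()];
      const results = fetchMany(ids); // one batched backend call
      for (const id of ids) {
        pending.get(id).forEach((cb) => cb(results[id]));
      }
      pending.clear();
      return ids.length;
    },
  };
}

// Three gets (with a duplicate), one backend call:
let backendCalls = 0;
const batcher = makeBatcher((ids) => {
  backendCalls++;
  return Object.fromEntries(ids.map((id) => [id, `item-${id}`]));
});
const seen = [];
batcher.get(1, (v) => seen.push(v));
batcher.get(2, (v) => seen.push(v));
batcher.get(1, (v) => seen.push(v));
batcher.flush();
```

Callers never think about batching, deduplication, or when the fetch happens; that's the whole point of the pattern.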

Our platform is also heavily based around events, for which handlers are automatically unregistered when pages and tools are removed. So it's the same as with Facebook.

Plus we already support most of the following:

> Offline support. This will allow applications to fulfill queries and enqueue updates without connectivity.

> Real-time updates. In collaboration with the GraphQL community, we're working to define a specification for subscriptions and provide support for them in Relay.

> A generic Relay. Just as the power of React was never about the virtual DOM, Relay is much more than a GraphQL client. We're working to extend Relay to provide a unified interface for interacting not only with server data, but also in-memory and native device data (and, even better, a mix of all three).

> Finally, it's all too easy as developers to focus on those people with the newest devices and fastest internet connections. We're working to make it easier to build applications that are robust in the face of slow or intermittent connectivity.

See http://platform.qbix.com/guide/messages

So GraphQL is basically a query language and optimizer? Why not have a relational algebra library, a query generator (SQL, whatever), and an optimizer as separate things?

Seems like it has some similarities to OData.

BreezeJS is a stand-alone data library for SPAs which takes care of managing the lifecycle of data objects; querying, fetching, and caching are all taken care of. Queries use OData by default.

Aspects of React, Relay and Flux make me feel like my company's js framework could end up like Leibniz once we release it this fall...

A word of caution: you're probably right. React, Angular, and Backbone cover most use cases and are proven in production, well known, and well understood. Unless you're a big company with hard problems, why should anyone pay attention? Not to rain on your parade; I've had the inclination to build my own libraries in the past as well, but I always remember why it's not a good plan.

Any new framework these days has to bring a ton of innovation and performance to the table - is your framework faster and easier to use than React? Does it fulfill use cases that React cannot? How many more use cases does React cover than yours? Hint: there's React Canvas, React Native, React WinJS, and you can run it on your server. There's also approximately a billion modules for these libraries combined.

There are plenty of examples of projects that are DOA. If you want yours to be successful you'll REALLY have to sell it. Not that I'm rooting against you! :) If your library is more innovative and more powerful, more power to you! There's nothing wrong with progress. I'm just skeptical because the aforementioned libraries work for nearly 100% of companies.

More undebuggable magic.

I had 12 upvotes here, and then returned to 1.

Another awesome gift from Facebook, thanks a lot devs! But I am still eagerly waiting for React Native for Android. Any updates on its development?

> While working on an "isomorphic" app

Now you should say "universal"; "isomorphic" was a poor choice of words in the first place and led to a lot of misunderstanding (and bad blood between JS developers and mathematicians).


> As applied to JavaScript, Charlie Robbins presented the idea in 2011. He called it "Isomorphic JavaScript" which has resulted in years of debate over the poor name. In recent months, the term Universal JavaScript has gained acceptance.

Words take on new, unintended, and incorrect meanings all the time. It's how language works.

When someone says "isomorphic JS" I know exactly what they are talking about. I understand why some people might push back against it, but there's nothing wrong with the term.

"Being abstract is something profoundly different from being vague … The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise." -Dikstra

When you overload an existing term you make it more vague, thus destroying the precision of an abstraction.

> Words take on new, unintended, and incorrect meanings all the time. It's how language works.

No, that's how marketing works. We are developers not salesmen, and as computer scientists respecting other science branches is a duty. Using the expression "isomorphic" in that context is just confusing. And it makes developers sound like they haven't a fn clue what they are talking about.

> We are developers not salesmen

Speak for yourself. Some of us are both.

> and as computer scientists respecting other science branches is a duty. Using the expression "isomorphic" in that context is just confusing.

That's a fair point, but "context" is the key word. Using "isomorphic" incorrectly in a mathematical context would be one thing. But borrowing the word to use in a largely unrelated context seems fine to me. And it isn't like this is the first time a word has been borrowed and used to mean something "similar or related, but not quite the same". I'm pretty sure this has even happened inside other scientific disciplines, although I'll grant you that I don't have an example at my fingertips.

I would be more sympathetic if they had "borrowed" (more like appropriated) a term from, say, biology or french literature, but mathematics is not exactly a "largely unrelated context" from programming, even if it's JS that we're talking about. What's next, abusing "binomial" to mean "binary"?

We detached this from https://news.ycombinator.com/item?id=10043822 and marked it off-topic (though agree about the word isomorphic).
