Hacker News
[flagged] Facebook Relay: An Evil And/Or Incompetent Attack on REST (pandastrike.com)
188 points by mwcampbell on Oct 19, 2015 | 143 comments



I deal with REST zealots like this every day :( Just the fact that people spend so much time debating what is and what isn't REST should be a red flag that maybe it isn't the answer to everything.

All the simple REST examples make it look like it is the perfect solution for CRUD tasks. But in reality API endpoints are not a straight pass-through to the database. An endpoint may do any combination of CRUD tasks; it can't be typecast as a single one with an HTTP verb.

Overloading HTTP error codes is another interesting problem. If the server returns 404 for a resource, what actually happened? Did the web server or my application return the 404? Conflating HTTP and application error codes leads to confusion.

HATEOAS is just superfluous. I've met zealots who will defend it to the death. I just haven't found a practical use for it.

Surface area and complexity are a big deal. REST encourages creating a CRUD interface for every single resource. I've met people who have created literally 100 endpoints upfront for a small application - how is that maintainable?

I like that Facebook is standing up to the zealots. I get attacked at work when I try to create simple API endpoints and they aren't 100% REST (if anyone could agree what that even is.)


The point of the article is that REST, as originally described, DOES NOT encourage creating a CRUD interface for every single resource. That's very typical for the simplistic Rails-style of REST, but that's not the original intent.

And that's just the point, and why those of us who care about web-friendly API design get a bit "zealous." Imagine, if you will, that someone's complete argument about the pros and cons of JavaScript were based on pre-Node, hell, pre-jQuery use cases. Wouldn't you expect some developers to be a bit upset that you're mischaracterizing an entire programming language?

Keep in mind, a lot of what REST is known for today came to pass as a backlash against XML-deathstar-style architectural designs for web services over 10 years ago. At the time, plenty of folks were simply porting ideas and designs from CORBA-style RPC architectures over to the web. While this is possible, you lose all sorts of advantages of an internet-centric design. And that's really what REST is about: designing applications that work with the web, not against it.


Your last two sentences are hard to understand, since you seem to be conflating the internet with the web. They're not the same thing.


What's the difference?


I agree with pretty much all of this.

We developed an API because we had desktop/mobile/website. They all use the same API. I call it REST-ish.

I can't see how people think returning 404 for a "this product doesn't exist" is defensible. You now have two different things intertwined: this product doesn't exist (generally not a huge deal - what if an end user manually edits the url on your storefront?), and this url makes no sense (a huge deal - something is broken).

If you had code that threw the same exception when a product didn't exist vs when the table in your database doesn't even exist you would be ostracized. I don't see how it's somehow OK because it's REST.


HTTP error codes are something of an embarrassment to the web.

We have more codes for April Fools' jokes than for the most common situations we actually deal with. We have some bizarrely specific codes like Payment Required and Requested Range Not Satisfiable, yet almost all real-world application responses get dumped into 400, 500 and a few other codes.

How about more specific and useful codes like:

Parameter Not Supplied, Parameter Name Invalid, Parameter Value Invalid, URL Path Invalid Format, Could Not Parse Data, etc.

This would also allow client libraries to decide better on how to process errors (i.e. as application or systemic errors).

I'm sure folk have better suggestions than I do, but I can't imagine many people genuinely feel the existing codes are expressive enough for the modern (API-centric) web.


Agreed, the codes are a crap-shoot. We are working on a redesign of our API and I've pushed for always returning a 200 with a defined JSON structure of something along the lines of:

    {
      "status": true,
      "data": { ... }
    }
for success and

    {
      "status": false,
      "errorCode": 123,
      "errorMessage": " ... " // Only shown in dev/debug mode
    }
for failures.

We haven't nailed it all down yet, but there was no way I was going to use HTTP status codes for some things and JSON errorCodes for others. It needed to be all or nothing, and since HTTP codes obviously wouldn't suffice, it was 200s everywhere...


I think for application status codes it makes sense to always return 200 and do as you have done.

Of course, if your API has received bad data then better to return a 400 and a similar payload.

And if your application has caught an exception which is not part of any form of validation then better to return 500 and the payload.

Got to do the best we can with the hand we're dealt hey!


This is what we do as well. As an added bonus, it makes writing client libraries for our API easier since error handling is more consistent. (Many HTTP libraries handle error status codes differently than 200s, which is annoying since you now have to handle errors in two places.)
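For what it's worth, the consistency win is easy to see in a sketch. Assuming the envelope shape from the grandparent comment (the field names and the hypothetical `ApiError` type are mine, not any real library's API), client-side error handling collapses to one function:

```python
# Minimal sketch of one-place error handling for an always-200 envelope.
# ApiError is a hypothetical exception type for this illustration.

class ApiError(Exception):
    def __init__(self, code, message=None):
        super().__init__(message or f"API error {code}")
        self.code = code

def unwrap(envelope):
    """Return the payload on success, raise ApiError on failure."""
    if envelope.get("status"):
        return envelope.get("data")
    raise ApiError(envelope.get("errorCode"), envelope.get("errorMessage"))
```

Every call site then deals with exactly one error channel instead of mixing HTTP-level and body-level failures.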


> I can't see how people think returning 404 for a "this product doesn't exist" is defensible.

Because that fits perfectly the definition of a 404.

> You now have two different things intertwined: this product doesn't exist (generally not a huge deal - what if an end user manually edits the url on your storefront?), and this url makes no sense (a huge deal - something is broken).

Arguably, the "this URL makes no sense" case is a better fit for a 400 [0] rather than a 404 [1], so those two cases need not be conflated even when using a 404 for the "this product doesn't exist" case.

[0]: "The 400 (Bad Request) status code indicates that the server cannot or will not process the request due to something that is perceived to be a client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing)."

[1]: "The 404 (Not Found) status code indicates that the origin server did not find a current representation for the target resource or is not willing to disclose that one exists."
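To make the split concrete, here's a framework-free sketch (the product store and handler shape are hypothetical) that keeps the two cases on separate codes, per the definitions quoted above:

```python
# A malformed URL segment gets a 400; a well-formed URL for a
# nonexistent product gets a 404. Data and names are illustrative.

PRODUCTS = {42: {"name": "widget"}}  # stand-in for a real datastore

def get_product(raw_id):
    """Return an (http_status, body) pair."""
    if not raw_id.isdigit():
        # client sent something that can't be a product id: Bad Request
        return 400, {"error": "product id must be numeric"}
    product = PRODUCTS.get(int(raw_id))
    if product is None:
        # the URL was fine, the resource just isn't there: Not Found
        return 404, {"error": "no such product"}
    return 200, product
```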


> I can't see how people think returning 404 for a "this product doesn't exist" is defensible.

To tell Google for example to stop sending traffic to this URL because it's gone.


Is this a thing? Having Google index your REST api?


Which isn't relevant with regards to APIs.


In a lot of ways I think what you are saying is actually in line with the article: most people don't actually understand REST, and that is leading to a lot of poorly designed APIs. Unfortunately, the zealots are often the worst offenders in propagating the misunderstandings. For instance:

> Surface area and complexity are a big deal. REST encourages creating a CRUD interface for every single resource. I've met people who have created literally 100 endpoints upfront for a small application - how is that maintainable?

I disagree; REST does NOT encourage a CRUD interface for every single SERVER-SIDE resource. Your REST resources should be modelled after the resources your CLIENT needs. You should not be designing your client resource models based on the underlying server models, regardless of using REST, SOAP or some kind of roll-your-own system.


It does encourage more endpoints, because I can't include data from two different objects in a response. They have to be fetched with separate GETs. With highly relational data, where I need to create an endpoint that returns information about 50 different object types, how should I do that? My rolled-up response is no longer REST.

When I need to act on that data and call an API that can do a variety of CRUD operations to 50 different objects types, how should I do that in a REST way?

It leads to developers creating a multitude of APIs to interact with those 50 different object types. Just so they can say their API is REST. For someone trying to use the API it's a nightmare.


Nonsense. A URL is a resource endpoint. If a resource is a composition of other resources, it is perfectly acceptable to return all composed resources.

Let's try an example: A shopping application, with customers, products and orders. An order composes a customer and all products that were ordered. You'd have endpoints for retrieving a customer (/customer/<id>/), a product (/product/<id>/) and an order (/order/<id>/). When retrieving the order, nothing in REST prohibits you from returning full information about the customer and about the products.

Now, advancing the discussion, the reverse complaint is that it is wasteful to return the full information complement when GETting an order. Easy peasy. Just use proper MIME types and the Accept header. Pass in "Accept: application/json+order+deep" or "Accept: application/json+order+plain" from the client side, signaling the kind of response you want from the server.
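A minimal sketch of that negotiation, with made-up media type names and data shapes (nothing here is a real framework API):

```python
# The server inspects the Accept value and returns either a deep
# composition or a plain record with identifiers. All data is invented.

ORDER = {"id": 1, "customer_id": 9, "product_ids": [3, 4]}
CUSTOMERS = {9: {"id": 9, "name": "Ada"}}
PRODUCTS = {3: {"id": 3, "name": "tea"}, 4: {"id": 4, "name": "mug"}}

def get_order(accept):
    if accept == "application/json+order+deep":
        # inline the composed resources
        return {
            "id": ORDER["id"],
            "customer": CUSTOMERS[ORDER["customer_id"]],
            "products": [PRODUCTS[p] for p in ORDER["product_ids"]],
        }
    # plain: just the identifiers; the client can follow up if needed
    return dict(ORDER)
```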


> Pass in "Accept: application/json+order+deep" or "Accept: application/json+order+plain" from the client side, signaling the kind of response you want from the server.

This is an interesting approach, but I'm wondering how it interacts with caching: how does the client properly differentiate between the two calls? (I guess the 'Vary:' header has something to do with this, but I've never used it directly.)

Also, why change the content type when one could just add some query string parameters? With many libraries, both client and server side, this is probably easier to do.


> Also, why change the content type when one could just add some query string parameters?

For the same reason you use HTTP verbs. It conveys more information within the standard. Software in the middle may act upon it. A cache is a perfect example. Imagine the cache sees a request accepting json or xml. The server chooses to return json. A second request accepts only json. The cache knows, from the standard, that the first response is valid for the second request.

Not that you can't configure something like Varnish to reproduce this behavior with query params, but with the standard oriented approach you get the correct caching behavior out of the box.
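A toy illustration of that out-of-the-box behavior (the cache class and shapes are invented, not Varnish): because the response effectively varies on Accept, entries remember the Content-Type the server chose, and a later request whose Accept list includes that type is a hit.

```python
class VaryCache:
    """Toy Vary:Accept-style cache. Entries are keyed per URL and
    remember the Content-Type the origin chose; a request whose
    Accept list includes that type can reuse the cached response."""

    def __init__(self):
        self.store = {}  # url -> (content_type, body)

    def lookup(self, url, accept_types):
        hit = self.store.get(url)
        if hit and hit[0] in accept_types:
            return hit[1]
        return None

    def save(self, url, content_type, body):
        self.store[url] = (content_type, body)
```

The scenario from the comment: a first request accepted json or xml and the server chose json; a second json-only request is still a cache hit, while an xml-only client is not served the json body.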


HATEOAS-y design gives you a great deal of flexibility in how you structure your server side without that complexity bleeding into the client.

E.g., by using full URLs throughout your API, your client doesn't need to care whether it's getting info from "http://example.org/item/23" or "https://cdn.example.com/aonteuh/d81c7d65-1352-48f0-97fd-dc1c...

Imagine if image tags in HTML were just "<img imgid='23'>" and the browser knew to look them up at "http://example.org/images/23". You could make a functioning web site with that, but it would limit the things you could do, and maybe even the way you thought about linking.

That being said, there are almost always things you can do "better" by constructing the URLs in the client, like reducing response size, and fetching multiple items in a request. It's a trade-off, and I'd be wary of a 'REST enthusiast' who didn't acknowledge that. It could mean that they don't have enough real-world experience to have encountered the rough spots.
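In miniature (`fetch()` and the response shapes are stand-ins, not a real HTTP client): the client only ever dereferences URLs it was handed, so the server is free to move items to a CDN without any client change.

```python
# The listing hands out full URLs; the client follows them blindly.
# All URLs and payloads here are invented for illustration.

RESPONSES = {
    "http://example.org/items": {
        "items": [{"href": "https://cdn.example.com/aonteuh/item-23"}],
    },
    "https://cdn.example.com/aonteuh/item-23": {"name": "widget"},
}

def fetch(url):
    # stand-in for an HTTP GET
    return RESPONSES[url]

def first_item(collection_url):
    # follow the link the server gave us, wherever it points
    listing = fetch(collection_url)
    return fetch(listing["items"][0]["href"])
```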


I too, am annoyed by those that harp on about /noun/verb ordering and semantics for RESTful URLs. I bite my thumb at them.

But REST is really dead simple awesomeness, when it fits into the activities you're trying to support. Reading blog articles, forum posts. Sure, why wouldn't you?

Complex analytics joins that converge NoSQL data and cc payment identities over time, for advertisers? No. Not for that.

Abortions for some, tiny flags for others!


> Overloading HTTP error codes is another interesting problem. If the server returns 404 for a resource, what actually happened?

I, the dude who made the API, sent you a 404. I sent you a 404 because that's all I wanted you to know. What actually happened is none of your business.

If I wanted you to know, I would have sent a 400 with an explanation attached. But I didn't. Because in this particular case, it was none of your business. 404.

> REST encourages creating a CRUD interface for every single resource. I've met people who have created literally 100 endpoints upfront for a small application - how is that maintainable?

Why are you making your endpoints by hand? Use a framework where you can define models for your resources and have the framework create everything automatically.

If you do need unique code for each of those models ... you don't have a small application on your hands.

> I like that Facebook is standing up to the zealots.

Are they, though? Or are they baking something that is uniquely suited to their own needs?


> Or are they baking something that is uniquely suited to their own needs?

And? If it is also suited to my needs, what's wrong with that?

The existence of Relay and GraphQL doesn't mean that you (the developer of some API) have to use it.


"I, the dude who made the API, sent you a 404. I sent you a 404 because that's all I wanted you to know. What actually happened is none of your business."

As an engineer working for the same company, I very much want to know, because I want to know if I did something wrong on my end, or if something is broken on your end.


Even an internal API should be designed with the same principles as an external one, because through a security error it may end up being exposed.

When designing an API, certain conditions must return a 404 with no further explanation given. These are purely security concerns.

If you try to login with an email and a password, I will return a 404 with no further explanation if the auth request fails. Even if the email exists in the system. This will prevent abuse of the login mechanism to confirm what emails are valid and which are not. The same principle applies for other resources.

On the other hand, if you truly send a malformed request and explaining to you in what way it is malformed poses no security risk ... only a jackass would return a 404. I would send a 400 with a detailed explanation, so that you can fix it.


> If you try to login with an email and a password, I will return a 404 with no further explanation if the auth request fails.

Auth is something different. You should return a 401 with no additional information indeed.

However, the point is that as an API client, you want to be able to distinguish between "this bookshelf does not contain book X" and "what are you talking about, there is no bookshelf here".


In one of my projects I started returning 409 Conflict for the former case here. Its description seems to match this use case:

> The request could not be completed due to a conflict with the current state of the resource. This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. The response body SHOULD include enough information for the user to recognize the source of the conflict. Ideally, the response entity would include enough information for the user or user agent to fix the problem; however, that might not be possible and is not required.

If you consider the "current state of the resource" to be "it doesn't exist yet, but you could create it".

It seems useful to distinguish between requests that could be fulfilled if the database contained different things, and requests that couldn't.


> Auth is something different. You should return a 401 with no additional information indeed.

Yup, that's better.

> However, the point is that as an API client, you want to be able to distinguish between "this bookshelf does not contain book X" and "what are you talking about, there is no bookshelf here".

I suppose you can return a 403 if it's something you shouldn't be accessing, but then you're bleeding information. You're letting the client know that it exists, but they don't have permission to access it.

A scheme that checks for permissions first and defaults to 403 when they're insufficient regardless of the existence of the resource would work though.

I guess it depends on how you implement the entire system.
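A sketch of that scheme, with hypothetical names: permission is checked before existence, so a denied caller gets the same 403 whether or not the resource exists, and nothing leaks.

```python
# Permission check happens first; existence is only consulted for
# authorized callers. Data and callback names are illustrative.

DOCS = {"report-1": {"owner": "alice", "body": "..."}}

def get_doc(user, doc_id, can_read):
    if not can_read(user, doc_id):
        # same answer whether or not doc_id exists: no information leak
        return 403, None
    doc = DOCS.get(doc_id)
    if doc is None:
        return 404, None
    return 200, doc
```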


"If you try to login with an email and a password, I will return a 404 with no further explanation if the auth request fails."

There's already HTTP Status Codes for failed auth. Use those.


> HATEOAS is just superfluous. I've met zealots who will defend it to the death. I just haven't found a practical use for it.

This is really funny. Are you sure you could not spot a practical use of HATEOAS? Like, for example, every website that lets you navigate through it using links, without making you edit the URL by hand!

Not talking about APIs here, as most JSON APIs are indeed not HATEOAS, because JSON has no built-in support for hypermedia. But most of the internet's websites that use HTML also use HATEOAS, so IMHO it seems a little pretentious to say that HATEOAS is "just superfluous" :)


HATEOAS

I love the idea of HATEOAS, but in the end I've never cared enough to jump through the hoops of using it (i.e. it didn't offer enough value for the work it would take), which in turn meant that the effort of implementing it on the server seemed too expensive, too.


> REST encourages creating a CRUD interface for every single resource.

But it doesn't enforce it. You can always limit a resource to just GET. The joy of REST is that if you need to 'crudify' a resource later on, you can add the necessary verbs on whichever resource you want.


> people spend so much time debating what is and what isn't REST should be a red flag that maybe it isn't the answer to everything

Does not follow. People spend so much time debating what is and what isn't REST because there are a great number of people who do not understand the topic, have no actual interest in learning it properly, and then come to wrong conclusions and start making wildly inaccurate arguments.

Not all Web application problems can be cleanly modelled over REST, so indeed it is not the answer to everything, but most can. For those that can, using the REST architectural style is more advantageous than, let's say, remote procedure calling.

> All the simple REST examples make it look like it is the perfect solution for CRUD tasks.

All the simple REST examples are written by those ignorant people I mentioned above. The simple CRUD model falls apart when there is just a little more complication involved. REST does not mean just CRUD, as any decent REST book teaches.

> If the server returns 404 for a resource, what actually happened? Did the web server or my application return the 404?

If the server returned 404, the server returned 404. This means the client application/the user agent made a mistake, indicated by the leading digit 4.

> Conflating HTTP and application error codes leads to confusion.

When following REST, that conflation is actually good practice. Not doing so means tunnelling a proprietary, possibly ad-hoc protocol over HTTP, resulting in non-interoperability. There is no need for any confusion: the response body can deliver a precise problem description applicable to the concrete error condition, perhaps even giving an indication of how to fix the problem. There is a wealth of status codes <https://github.com/for-GET/know-your-http-well/blob/master/s... and many of them semantically map to typical application error states, and it's okay to fall back to generic codes like 422 or 400.
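One way that mapping could look in practice (the condition names are illustrative, not from any standard, with 422 as the generic fallback for a well-formed but unprocessable request):

```python
# Specific application error conditions map to the closest semantic
# HTTP status code; anything unmapped falls back to 422.

STATUS_FOR = {
    "missing_parameter": 400,
    "auth_required": 401,
    "forbidden": 403,
    "no_such_resource": 404,
    "edit_conflict": 409,
}

def status_for(condition):
    return STATUS_FOR.get(condition, 422)
```

The response body then carries the precise, application-level problem description alongside the interoperable status code.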

> HATEOAS is just superfluous. […] I just haven't found a practical use for it.

You need to bring a better argument to the table. Establishing relations between and traversing resources using links and other hypermedia controls is central to the REST architectural style. Every Web browser does this, for timbl's sake!

> REST encourages creating a CRUD interface for every single resource.

Not true. Nothing in REST encourages this. It's the software architect's fault if there are 100 (supposedly underutilised) resources, not the fault of the architectural style.


> If the server returned 404, the server returned 404. This means the client application/the user agent made a mistake, indicated by the leading digit 4.

Or the endpoint wasn't configured properly even though the client had the right address. This happens often during development; maybe another developer changed the endpoint URL? Or in production some endpoint hasn't started. (Yes, this is seldom a problem in production, or you usually know from other signals that the problem is at the server. Annoying still.)

I've had problems with not knowing whether a 404 is a "person not found" type error or an "endpoint address is plain wrong" error.

"Bad Request" doesn't describe the situation very well either; I've used it myself to, e.g., signal missing mandatory parameters.

But just using, e.g., code 200 as someone suggested and returning your own status message seems a little bit overkill.


> I deal with REST zealots like this every day :( just the fact that people spend so much time debating what is and what isn't REST should be a red flag that maybe it isn't the answer to everything.

REST isn't the answer to everything, to be sure -- but it's actually not the REST zealots who pretend it is. Rather, it's the opposite: the people who pretend that REST is the answer to everything are the people who just throw the label "REST" around willy-nilly on whatever solution they've come up with for whatever problem they are dealing with.

REST zealots are generally fine with people offering solutions that aren't REST, especially for problems for which REST isn't a particularly good fit; what they object to is people claiming something is REST that isn't, which interferes with people's ability to understand what REST is and what it's good for, or even to understand what is being proposed.

> Overloading HTTP error codes is another interesting problem. If the server returns 404 for a resource, what actually happened?

A consumer really shouldn't, in the normal case, care what generated the error code, they should care what the error code means. 404 means the requested resource didn't exist. What component of the system processing the request made that decision shouldn't matter to the consumer (and may indeed change without any change to the meaning because of implementation changes.)

The people responsible for the implementation of the system returning the status code certainly care, and should have sufficient logging detail to support that.

In any case, HTTP/1.1 specifies [0] for 4xx series errors that "Except when responding to a HEAD request, the server SHOULD send a representation containing an explanation of the error situation, and whether it is a temporary or permanent condition." So, any implementation following the standard will, unless there is a good reason not to, provide an explanation of the error with the 404 response, so to the extent that there is further information that a consumer needs to understand, it should be provided.

> HATEOAS is just superfluous. I've met zealots who will defend it to the death. I just haven't found a practical use for it.

Have you used a web browser? If you have, and you understand how browsers work, you should be able to think of a practical use for HATEOAS. If it's not applicable to your problem, that's fine too; just don't call whatever solution you cook up REST, because it isn't.

> REST encourages creating a CRUD interface for every single resource.

No, it doesn't.

> I've met people who have created literally 100 endpoints upfront for a small application - how is that maintainable?

Probably more so than an application that puts 100 unrelated pieces of functionality into the same endpoint. Now, if there was too much functionality implemented for the problem, that's not REST's fault; REST really addresses how you organize functionality, not what functionality you provide.

[0] http://tools.ietf.org/html/rfc7231#section-6.5


This article is a load of bullshit. When Facebook announced Relay it made immediate sense to me. The problem of fetching the right data from the backend is a pain in the ass for anyone that has built more than a simple todo app.

To me it is like sending a SQL query to the backend and instead of getting rows back you get objects nested in a way you specify.

The problem when using REST "correctly" is that the way your objects interrelate is not necessarily the same way on the backend as it is on the UI. So you end up creating custom REST endpoints for complicated UI that does not map directly to how your objects are related in the database.

This article offers no substance about how they want to solve it beyond just spouting "use REST correctly".
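The "objects nested in a way you specify" idea above, reduced to a toy resolver (this is an illustration of the GraphQL-like idea, not GraphQL itself, and all names are invented): the client sends a query shape, and the server walks it against nested data, returning only the requested fields.

```python
# A query shape is a nested dict; None marks a leaf field to return
# as-is. The resolver prunes everything the client didn't ask for.

DATA = {
    "user": {
        "name": "Ada",
        "email": "ada@example.com",
        "friends": [{"name": "Grace", "email": "g@example.com"}],
    }
}

def resolve(shape, data):
    out = {}
    for field, subshape in shape.items():
        value = data[field]
        if subshape is None:              # leaf: return as-is
            out[field] = value
        elif isinstance(value, list):     # nested list of objects
            out[field] = [resolve(subshape, v) for v in value]
        else:                             # nested object
            out[field] = resolve(subshape, value)
    return out
```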


> The problem when using REST "correctly" is that the way your objects interrelate is not necessarily the same way on the backend as it is on the UI. So you end up creating custom REST endpoints for complicated UI that does not map directly to how your objects are related in the database.

Not defending the article here, but I don't see this as a problem at all. Yes, your API objects should not necessarily map directly to your persistence layer; that is a good thing. The problem domains of the UI and the persistence layer are completely different and should therefore be modelled differently. Your API mediates between the two, acting like a repository for your UI models and abstracting away the persistence layer completely.


You are exactly correct about the role of the API here. A problem arises once you have multiple clients, each with different views, as well as different versions of those views. You can end up with a proliferation of API endpoints, each serving a custom view object for each different type of view, or you make your endpoints generic enough to serve all views, thus serving more data than any one view will ever use. You'll naturally have a tension between endpoint reusability and specific usefulness. The article actually seemed to have some of that tension itself, saying both "If you only want a subset of that graph, make an endpoint around that subset." and also "[Ad hoc endpoints] aren't REST", without giving any guidance as to how to make endpoints around arbitrary subsets of graph data without descending into being ad hoc. I'm not saying that you can't balance these with a good REST API, just that there is a real tension.


Right, you shouldn't feel like you are "sending a SQL query to the backend" as OP suggests. An API is an abstraction layer, you shouldn't have to know anything about the database to use it.


Actually, they mention using content type negotiation for versioning, content types as a type system, HTTP caching for speed and reducing the number of requests, and lots more solutions.

The article actually makes some well reasoned points, and I think poses a very good challenge to what Facebook are doing.


I don't think Facebook ever claimed that HTTP was insufficient. They claim that HTTP/REST does not scale in large and dynamic projects, and Relay/GraphQL solves some of those scaling concerns. Given how strongly Relay/GraphQL is resonating with developers, I would tend to agree that the raw HTTP way of solving these things (as you note above) are not good enough for the needs of most projects.


> They claim that HTTP/REST does not scale in large and dynamic projects

This is my point, the blog post has a good HTTP/REST based answer for all or most of the criticisms Facebook has of it, which I think poses a challenge to Facebook.

> Given how strongly Relay/GraphQL is resonating with developers

I'm inclined to avoid appeal to the majority, after all, PHP is very popular.


I think the appeal to majority goes the other way in this case. The article insists that existing technologies which dominate usage today (i.e. HTTP) are good enough, and that Facebook's innovations are misguided. The authors have previously made similar claims about the DOM (vs. React). And after all, HTTP makes up a far greater share of server architectures than Relay/GraphQL does at the moment.


The versioning section of the article made absolutely no sense to me. Yes, HTTP has a mechanism to encode version info. The author completely ignores Facebook's claim that dealing with that complicates server-side code (because it has to distinguish between different versions), and instead goes on a rant that because Facebook supposedly doesn't realize the protocol allows encoding this (as if Facebook's developers wouldn't know), they don't understand REST. No answer to how this magically makes the server side easier; it completely ignores what I think is the actual argument.

And really, most of the article feels like this. His facts are not wrong, but they don't match the claims they supposedly refute. Or when they occasionally do, it is drenched in so much rant-sauce that it gets covered up.
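For reference, the mechanism the article points at can be sketched like this (the media type strings and handler names are invented, not from the article or Facebook): the client names the representation version in Accept and the server dispatches on it.

```python
# Version dispatch on the Accept header; unknown types get a 406.

def order_v1(order):
    return {"id": order["id"]}

def order_v2(order):
    return {"id": order["id"], "total_cents": order["total_cents"]}

HANDLERS = {
    "application/vnd.example.order.v1+json": order_v1,
    "application/vnd.example.order.v2+json": order_v2,
}

def render_order(accept, order):
    handler = HANDLERS.get(accept)
    if handler is None:
        return 406, None  # no acceptable representation
    return 200, handler(order)
```

Whether this dispatch table is simpler to maintain than Facebook's alternative is exactly the disputed point; the sketch only shows that the protocol hook exists.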


For those points to be well reasoned, one would have to assume that no one at Facebook has tried those things, though. If they really are integral to what REST is, then it doesn't really make sense that no one there knew what it was.


> For those points to be well reasoned, one would have to assume that no one at Facebook has tried those things, though.

Not quite. These points were well reasoned given the information Facebook has made public. If they have tried those things (which I fully expect them to have done), all that means is that there is an answer to these challenges.

To clarify: I expect that Facebook has a rebuttal to these points, however my guess would be that the rebuttal will be more Facebook specific than the motivation for these technologies that they specify publicly.


As much as I agree with your argument, that's going to be a problem no matter how you do it. You will always run into an impedance mismatch. The problem is, which area should drive the solutions of another area? You'll never get a "good" answer.


> The problem when using REST "correctly" is that the way your objects interrelate is not necessarily the same way on the backend as it is on the UI. So you end up creating custom REST endpoints for complicated UI that does not map directly to how your objects are related in the database.

I see this as a good thing. In my mind, an API is made to be consumed by several different clients, so it should make sense on its own and not try to follow how objects are mapped onto one UI. However, it's also common to include "macro-endpoints" alongside regular routes to simplify a succession of common simple side effects (like register and then log in). That's not "correct" REST, but not a bad thing either.


Well slow down for a second...

The web is a big public, interconnected network of documents and links between them.

Relay is encouraging single-page applications that use private data and privileged authentication schemes. That is, you have to log in with Facebook or create your own private authentication services. The state that you're requesting with Relay and GraphQL and using to build your HTML depends on personalized data and doesn't have much to do with linking to public URLs.

You can most definitely use React to build static HTML and work within the context of HTTP verbs [0] [1], but it pulls you away from the Flux pattern. You don't really benefit much from Relay/GraphQL if you're building web applications based on an interconnected network of documents and links between them... REST is a great pattern for those kinds of applications.

But yeah, I definitely see how a combination of React, Flux and Relay/GraphQL would inherently push an agenda that has very little to do with HTTP and the public web. Not that there's anything wrong with this, as there are many valid use cases for the web browser beyond the original intentions of the web, but it is a good thing to keep in mind.

[0] https://github.com/williamcotton/universal-react

[1] https://github.com/williamcotton/browser-express


He also lost me when he attacked React. React makes developing web UIs sane. It's a web UI framework written by grownups who took a CS class. UI is a pure function of state. Why is that so hard?

The fact of the matter is this: the web was never designed to be a rich application UI. Web UIs of any kind are a hack on top of the web. By definition they are embracing and extending because they were never there in the original HTTP/HTML spec to begin with. They're so popular because deploying actual apps to endpoint devices is so horribly painful by comparison.

This guy also doesn't understand what true embrace/extend tactics look like. Embrace/extend/extinguish is when you embrace something, extend it, and then drag the market into a state where the use of other things becomes impossible. Nothing Facebook is doing is doing that to the web as far as I can see. Web sites that don't use React, Relay, etc. work just fine.

If anyone is doing E/E/E against the web it is Google with Chrome and all the complicated nasty standards (e.g. the WebRTC mess) that it's dragging everyone into that only Chrome implements well. But even there I think it's a weak argument. I use Safari and Firefox and the web works fine in those 99.9999% of the time.


I love when one engineer thinks he's smarter than ALL of the second-largest Internet company's engineers put together--when he doesn't even have the context of working there to understand their reasoning. Clearly Facebook is evil and pays their employees solely to make us learn new technologies purely out of spite, not to improve their product (which they get paid very well to do).


> This article is a load of bullshit.

Most people in the comments and I seem to agree, which makes me wonder why it's at the top of the front page.


I have a problem taking seriously any article that spreads FUD by accusing their targets of spreading FUD.

The people who build React and the related ecosystem are incredibly smart people trying to make the web better. I'm certain there's no ill-intent in their motives. Regardless, they open source pretty aggressively. If an open technology from Facebook displaces another open technology from another source, I don't understand why we should be upset simply because the backer of the more popular technique was Facebook.


This site has a habit of doing that, it seems: https://twitter.com/vjeux/status/655128064754499588

The reasoning in all three of those posts feels super shallow; I have the most experience with React, and he's setting off a bunch of alarms in my head. "We don't need React because we have Web Components"--if Web Components were as high-perf ("they're getting faster so we don't need workarounds") or as just-plain-easy to work with (crickets!) even for a JavaScript dolt like me, maybe that'd be true. "But but open, but Facebook evil" is not a good-enough reason to use things that don't offer clear benefits. When Facebook twirls its evil mustachios and suddenly Embraces and Extends with...their...BSD-licensed software, I may worry. Somehow I'm not betting on that.

(He might even have a good point about the tight coupling of React, except that React, and indeed all of my JavaScript, is the presentational layer, and anything important is behind the API anyway.)


They appear to be middle-level freelancers, where convincing your clients that you are really smart and making a lot of noise in social media is more important than actually being really smart. (I am also a freelancer, so I'm not trying to be snarky so much as point out the incentives; the article is clearly not meant for HN.)


Couldn't have said it better myself. I'm sure there are some great points in this post but I can't get over the obviously biased perspective (and borderline ad hominem attacks on FB).


> If an open technology from Facebook displaces another open technology from another source, I don't understand why we should be upset simply because the backer of the more popular technique was Facebook.

Facebook's future does not depend on the web being open, and in fact depends to some extent on it being quite a closed system. This makes them a poor choice of steward for core technologies in my opinion.

I think the article's point is that the open technology from Facebook is a poor one in the wider context of the open web, and therefore if it does displace other models, that will slow us down and cause issues in the long run.


The web has been getting worse every year, mostly due to these companies spreading technologies to fix a web that is "broken" but that nobody but themselves complained about.

I agree wholeheartedly with OP. Facebook & co. might have problems with certain components of the web, but those components are neither broken nor in need of a fix. Sure, they don't fit Facebook's needs, but their needs are, well, theirs.

I can count on one hand the number of tech that came out in the past 5 or 6 years that are actually an improvement on the old if you are not Google or Facebook.

What OP maybe should have said is: stop trying to pass your problems off as everyone's problems. 99.9% of the web can function the way the web was 5 years ago, if not more. Hell, I was operating at high scale using standard stacks 7 years ago and I don't see why or how the situation has changed at all to warrant any of the bullshit technologies that are constantly touted as a fix.

Developing for the web has become more of a nightmare than anything else because of the likes of Twitter, Facebook or Google pushing their stack on people who never needed it.


Are Facebook actually passing their own problems as everyone's? They're making technologies which suit their needs better, and other people can use them too, but they're not forcing them on anyone.


I've spent many months over the past few years trying to really understand REST in order to design a good API and it never quite sat right - there was always something that seemed like a hack.

I'm not going to pick through the OP post, except this line:

"a REST endpoint can return whatever you want"

Yep. I know this and have done that: an endpoint to return complex data. And it never felt right - either I make a bunch of special endpoints losing all of the nice consistency of REST, or I have endpoints that return too much or too little. Or I have a highly parameterized endpoint that essentially lets me pass it a query (cough like GraphQL!)

At the start of the year, I was thinking about how I would like to write my applications client and what I (conceptually, I never got to implement it) came up with was rather similar in concept to Relay/GraphQL in that UI components declaratively state what data they are interested in and the system keeps it in sync by querying the server similar to how Relay/GraphQL do.

I personally find that model much nicer than REST.

But... as a consumer of third-party API's, I do love REST and worry about what a shift might do. However, from a SPA application development point of view, I love the concepts behind Relay and GraphQL more than REST.

But hey - I guess I just don't understand REST well enough (actually, I really don't - I'd be the first to admit it!), despite spending a long conscious effort in trying to. Which is the problem: this mythical "proper" REST seems too difficult to understand and use, so everyone kinda just makes up their own flawed flavour and calls it REST.


"REST" as we've come to know it is just a useful design pattern for APIs. That's it. It's not a law. You probably don't lack understanding of REST, every developer just has their own interpretation of edge cases (multi-resource responses, etc.) and different experience with what's worked and what hasn't.

I wouldn't get caught up in the details. Use whatever tool is appropriate to solve your problem(s). I think it's likely we'll find certain design patterns (GraphQL vs. REST) are best suited to projects with specific problem spaces or scope. Maybe it's always best to start out "RESTful" and begin migrating to a different interface as your app scales in complexity. Maybe the latter problem is only an issue with medium or larger projects. Perhaps it's best to teach developers to be strict with REST to develop good habits and start migrating away once they're confident and comfortable with their ability to make architectural decisions.

Either way --- don't sweat it. Experiment with what you like because you're the creator. Figure out what works best for you, and you'll be able to identify wins and challenges and decide for yourself how to use the technologies around you and what they bring to the table.


You're like the most well-tempered comment in the entire thread.


Here's a quote which I think well describes Facebook's stance on REST, before GraphQL and Relay were public. This is the idea that motivated GraphQL and Relay.

"REST does a shitty job of efficiently expressing relational data. ... I mean REST has its place. For example, it has very predictable performance and well-known cache characteristics. The problem is when you want to fetch data in repeated rounds, or when you want to fetch data that isn't expressed well as a hierarchy (think a graph with cycles -- not uncommon). That's where it breaks down. I think you can get pretty far with batched REST, but I'd like to see some way to query graphs in an easier way."

-- Pete Hunt in 2013, when he worked at Facebook on React: https://news.ycombinator.com/item?id=7600565


A query on a graph or a tree can both be RESTful. You can describe traversals of the graph or tree in a query embedded in an HTTP endpoint. You don't need to make repeated round trip queries to do the traversal unless you are unwilling to use a suitable query description.

Metaphorically this is all pretty funny because the web is already a graph at a fundamental level.


The problem is that traversing a HATEOAS graph (like the www) requires a request at each edge. (The query is only the entry point to the graph, it doesn't help you traverse the graph. Relational joins make it so you don't need to traverse the graph, but this requires a lot of custom endpoints that bake in up-front knowledge about which nodes of the graph will need to be pre-fetched)

edit: Either you edited or I misread, but "You can describe traversals of the graph or tree in a query embedded in an HTTP endpoint" - this is basically what GraphQL/Relay is: an easy way to do arbitrarily nested or recursive queries, like those you find in real complex applications.

(Plug: I am a React consultant, hire me!)


As someone with a bit more experience with these technologies, would you please describe what GraphQL actually has to do with graphs? I work with very, very large graphs. I looked at GraphQL to see if it could be of use, but as far as I can understand, the graph aspect of GraphQL is marketing multiplied by doublespeak. RDF and SPARQL seem to be much more apt systems in this space. So I would appreciate it if you could weigh in and clear up my confusion.


GraphQL lets you represent your API as a graph. You do this by defining a set of types that have fields in them which resolve to instances of other types. For example, if you have a "Person" type with a "friends" field that resolves to more people, you could traverse this as deeply as you wanted, like so:

query { me { friends { friends { friends { friends { name } } } } } }

That would get you the names of my friends of friends of friends of friends. It's a directed graph where you can traverse as many edges as you want in any direction in a single query. It's like a power tools version of your usual REST API.

What more are you expecting?

(in relay this would look a bit different due to the "connection" spec, but you're free to define a graphql endpoint this way too)
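To make the resolution model concrete, here's a toy Python sketch (my own illustration, not the actual GraphQL implementation) of how a nested selection like the query above resolves against an in-memory graph in one pass:

```python
# Toy illustration of GraphQL-style resolution: a nested selection set
# walks a directed graph of people, traversing as many "friends" edges
# as the query nests, all within a single server-side pass.

PEOPLE = {
    1: {"name": "Alice", "friends": [2, 3]},
    2: {"name": "Bob", "friends": [1]},
    3: {"name": "Carol", "friends": [1, 2]},
}

def resolve(person_id, selection):
    """Resolve a selection dict {field: sub_selection or None} for one person."""
    person = PEOPLE[person_id]
    result = {}
    for field, sub in selection.items():
        if field == "friends":
            # An edge to other nodes: recurse with the nested selection.
            result["friends"] = [resolve(fid, sub) for fid in person["friends"]]
        else:
            # A scalar field: copy it directly.
            result[field] = person[field]
    return result

# "me { friends { friends { name } } }" expressed as a nested selection dict
query = {"friends": {"friends": {"name": None}}}
print(resolve(1, query))
```

The point of the sketch is that the client describes the whole traversal up front, so the server answers arbitrary depth in one round trip.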


Haha your edit is dead on. Your parent comment arguing for REST just reinvented GraphQL.


LOL, yes that is a very good point. And how does the web solve this, by reference :-) Which is of course how you always solve cyclical graph representations in my (for sure limited) experience.


I have a near-immediate distrust of anyone who says "you're all doing REST wrong." I've built my share of REST APIs. I've seen firsthand the tradeoff they face as more use cases are added, between letting the payload size keep creep upwards or exploding the number of endpoints. I like what GraphQL is doing to fix that.

On the other hand I've never seen an API in the wild that I felt would really satisfy a REST purist. In theory, it'd be great to be able to stick a dumb HTTP cache in front of your app and have that solve all your performance and scalability problems. In practice, you have to sacrifice too much.


> a REST endpoint can return whatever you want. If you only want a subset of that graph, make an endpoint around that subset.

So we have to make endpoints around every possible subset?

> You want to avoid large object graphs anyway, for two reasons. Returning coarse-gained chunks of data tends to work against effective caching strategies, whether you're using REST, Relay, or anything else. You have to cache the whole chunk or nothing.

I thought the whole point of Relay et al. was that it would cache the union of the chunks, such that when I request more data, it can download only the exact pieces I need?


"So we have to make endpoints around every possible subset?"

No, you use the query in querystring. One way or another you're offering a query interface. A lot of REST is just about how you offer the same interfaces you were planning on offering anyhow. Which is probably a source of a lot of the frustration REST advocates experience. If you're genuinely doing something that REST can't do very well, OK then, but if you're doing something that HTTP can already do, why layer something else on top of HTTP that HTTP is already doing?

(Sometimes there are good answers to that question, though.)
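To illustrate the querystring idea, here's a minimal sketch (the `fields` parameter and the field names are hypothetical, just to show the shape): one endpoint serves every subset by projecting the resource.

```python
# Sketch of "the query in the querystring": a single REST endpoint whose
# response is shaped by an illustrative `fields` parameter, instead of
# one endpoint per possible subset of the resource.

from urllib.parse import parse_qs, urlparse

USER = {"id": 7, "name": "Ada", "email": "ada@example.com", "bio": "..."}

def get_user(url):
    """Handle e.g. GET /users/7?fields=name,email by projecting the resource."""
    qs = parse_qs(urlparse(url).query)
    fields = qs.get("fields", [None])[0]
    if fields is None:
        return dict(USER)  # no projection requested: return the full resource
    wanted = fields.split(",")
    return {k: v for k, v in USER.items() if k in wanted}

print(get_user("/users/7?fields=name,email"))  # {'name': 'Ada', 'email': 'ada@example.com'}
```

It's the same query interface you were going to offer anyhow, just expressed with plain HTTP.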


> No, you use the query in querystring.

Ok, but doing that straight on top of HTTP, will only cache queries that are exactly equal to previous ones. Not queries that are a subset of previous queries.


I think you have to be careful not to be too cavalier with the word "cache" there. While REST is sort of vaguely concerned with the ability to cache in the general sense, in real life we have to be specific about what is doing the caching. This is the sort of dynamic query you'd normally tell intervening proxies not to cache anyhow, so now we can reduce our concerns to the client and the server specifically. On the client, if it asked for a big chunk of the graph and now need to operate on a smaller on, I'd suggest that if the client does not want to incur another round trip, it would be incumbent on the client not to ask another question. (This is further backed up by a consistency concern; if I ask again, I may get a different answer because it has actually changed in the meantime. You may be better off with your code not even looking at the changes.) On the server, caching technologies can easily be arranged (if the server cares) to cache more granularly than is necessarily reflected in the answers. (In fact most of my "cloud" caches I'm currently responsible for do that all the time.)

I actually don't deny there's a use case here necessarily, but it may not be all that large, even to Facebook.


I was thinking of a large graph, where everybody wanted different parts. Then a cache somewhere in the middle could work, even if nobody wanted exactly the same parts.

However I guess you could achieve the same by sending many parallel requests.


If the [data returned by a] query is a subset [of data points already returned], why do you need to get it again? If you actually need to re-retrieve it, then it being cached is no use unless you mistakenly dumped it from the local cache, and that shouldn't happen often enough to justify caching the data. Or did I miss something?


I'm thinking about the case where you have a cache between you and the server, shared by multiple users.

I imagine this is useful at Facebook, for example for loading parts of public people's pages, etc.

You are right, that if the data is only used by you, you can just have an advanced cache in the client, sorting it out for you.


> So we have to make endpoints around every possible subset?

As I was reading this I was trying to figure out how he missed the point so badly, I think your point does a good job of that.

The OP might benefit from adopting the strategy of when you see someone saying something crazy/stupid, stop and think before writing an inflammatory blogpost.


> So we have to make endpoints around every possible subset?

Pretty much. GraphQL is conceptually the same as just letting your clients send an SQL query to your API. Relay seems a short step away from having a graph database engine running client side, a storage engine on the server and a remoting layer to glue them together.


s/catch/cache/ ?


Thank you.


> So we have to make endpoints around every possible subset?

No, you make endpoints for what clients actually need and use.

qyv has it right https://news.ycombinator.com/item?id=10413950 - responses are UI models and when you "model the problem domain of the UI layer" you're asking "which chunks of data do clients really need?"

If you really have clients that ask for "every possible subset" then this is a requirement that is completely unrelated to REST. i.e. you can try at solving it with REST, or without REST. The article actually says that: "If you're requesting some subset of a complicated object graph, and constantly re-negotiating which particular subset, then you have a problem which has nothing to do with REST."


Jafar Husain does an excellent job of explaining some of the problems netflix has had with REST APIs and why you might want to take a different approach in his introduction to Falcor video. Falcor is tackling some of the same problems as graphql / relay but making trade offs at different points.

https://netflix.github.io/falcor/starter/why-falcor.html

A REST API may be the best decision for a lot of projects but GraphQL and Falcor are solving interesting problems that come up more and more in single page apps of increasing complexity.


It's interesting to read the article with his presentation in mind. From a technical perspective, you should be able to just swap Netflix/Falcor in for Facebook/Relay in this article and have the arguments make about as much sense (they are both GraphQL-adjacent approaches to APIs that explicitly abandon RESTful principles). However, it seems to me that the article would be way less persuasive or interesting if you did that, because a lot of the intended force of the article relies on the premise "Facebook is evil", rather than the technology at hand. And using "Facebook is evil" to prove "this thing that Facebook is doing is evil" is way less interesting than showing the latter from technical details alone.


I'm not an expert on REST by any means, but it seems like that guy is sort of intentionally misrepresenting what they're doing. They didn't say Rest isn't good. They said that Rest isn't good for what they're doing with it. If you don't like it, don't use it.


An interesting comparison might be to something like sending protocol buffers over the wire instead of JSON [1].

You can make a 100% valid and spirited defense of JSON (human readable! lots of open source tooling!) which at Facebook's scale is trumped by "well it works 20% faster and we have 400 million users that will see the benefit".

And that scale aspect of this really matters. OP is completely correct in the criticisms that X_NEW_THING can be done in REST, but with the number of users + infrastructure that Facebook is dealing with, small improvements can really matter.

1 - https://code.facebook.com/posts/872547912839369/improving-fa...


fwiw, pandastrike seems to be on a crusade against all things Facebook/React so take this with a grain of salt. https://twitter.com/vjeux/status/655128064754499588



If you loved this post, I highly recommend this book! http://i.imgur.com/n4ogzt4.png


By the time I finished this article, I hated the authors' superior, paranoid, whining tone so much I found myself hoping Facebook does destroy the open web just because it would annoy them.


Two biggest issues: 1. This just reeks of "No true Scotsman". My biggest issue with this REST purist rhetoric is that it seems no one can actually implement REST, but apparently if someone could it would solve most/all of our problems. But after many years of smart people unable to implement REST properly according to these critics, then maybe the problem is with REST.

2. "If you implement REST, you use content negotiation, and thereby eliminate versioning and typing issues"

The author is conflating the issue of identifying a client version with actually responding with appropriate client-specific version data. It's one thing to identify which version via some mechanism (e.g. URL, query string, content type), but you still need the latest code to handle all of the versions you want to support. The author seems to believe that using the content type will solve both problems, whereas the real problem is actually having an implementation that responds with the right client-specific data and doesn't break between releases.
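To make that concrete, here's a rough sketch of version selection via content negotiation (the vendor media type and fields are made up for illustration). Note that the server still carries a code path for every supported representation, which is the maintenance burden content negotiation doesn't remove:

```python
# Sketch: the Accept header names a versioned media type, but the server
# must still implement every representation it promises to support.

def render_user(accept_header, user):
    """Pick a representation of `user` based on a (hypothetical) vendor type."""
    if "application/vnd.example.v2+json" in accept_header:
        # v2 splits the single name field into first/last
        first, _, last = user["name"].partition(" ")
        return {"first_name": first, "last_name": last}
    # default / v1 representation
    return {"name": user["name"]}

user = {"name": "Grace Hopper"}
print(render_user("application/vnd.example.v2+json", user))  # {'first_name': 'Grace', 'last_name': 'Hopper'}
print(render_user("application/json", user))                 # {'name': 'Grace Hopper'}
```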


I have to agree with this article. There is a tendency in tech to reinvent what we don't understand properly, and from experience of using third-party APIs (and from my own ignorance too) - HTTP is the thing we seem to know the least about (well actually one could argue TCP/IP is ...)

I read this article from top to bottom and I didn't feel the comments were extreme or purist. It did leave me thinking we need to spend more time learning and sharing knowledge of HTTP /REST than dismissing it.

Of course, the headline is clickbait - and that is the only real criticism I had.


We could assume that, yes, everybody, Facebook included, just doesn't know REST (and implicitly assume that REST is an inherent good in the process). Or one could go "you know, Facebook has a lot of insanely smart people and their engineers are on record as having had problems with REST around stuff like complex object graphs and cycles, so maybe their solution is worth looking at."

If you have to say "oh, this thing is universally great, you all just don't understand it," I will bet hard against you at the first opportunity and I'll usually win in the process.


A large group of smart people often produce some really dumb ideas, like 'Google Wave' - so I wouldn't personally use 'Proof by Large Group of Smart People' in this case. Hacker News has been the home of many a bad idea by smart folk.

Instead, maybe we should all be critiquing the points raised.

In my limited experience, most engineers (yes including very smart ones) are more enamoured with their own ideas (and not discounting myself here!) than learning other people's work well. And the critique I read didn't seem unfair in that respect, more so it encouraged me to look deeper at HTTP first before I add something into my apps own protocols. Which to me is a good thing.

Especially around content negotiation which I had not looked at beyond a superficial understanding.

For the record I ain't no purist :-)


Right, but the argument is that they just didn't understand REST, and yet they employ some of the smartest people around. If they can't understand REST, what chance do the rest of us have?


It's a shame that this article spends most of its time ranting and throwing around irrelevant attacks at Facebook (we get it, you don't like their product). If I buy the author's premise that REST does everything graphQL does and that everyone is just using REST wrong, please show us how to properly do it. Now I have to take the article's word for it, without any practical explanation how to correctly design a REST API.


I think there were some basic pointers in there, but yes I too would have liked some examples on how to do things better.

As much as I dislike people critiquing from a purist point, I also dislike my own ignorance on things that could keep my life simpler.


Seems to me like the article and Relay are broken.

1) Why the new declarative query language for fetching attributes (columns) from objects (tables)?

2) Why build yet another transport mechanism on the top of HTTP, when there are so many to choose from already?

3) Oh, my god, boilerplate everywhere! What is this, Java?! /s

Seriously, though, the OP has some serious issues with Facebook, but to be fair Relay stinks badly of NIH. They had great success with React (a love letter to PHP), followed that with Flux (more of a design pattern than anything else), but this third attempt seems like it's trying too hard to create yet another data transport system.


> Why the new declarative query language for fetching attributes (columns) from objects (tables)?

SQL isn't very composable. It was the right idea, but there's been a lot of progress in the last 50 years on how to traverse a database. It is all string-based, and it lacks a "pull syntax" (i.e. after the query completes, give me this graph-represented chunk of the result).

> Why build yet another transport mechanism on the top of HTTP, when there are so many to choose from already?

Relay is mostly agnostic on what mechanism is used to fetch results.

> Oh, my god, boilerplate everywhere! What is this, Java?! /s

Yeah, it's kind of awkward to do this stuff in JavaScript. Look at what people are doing in more flexible languages, like here https://github.com/omcljs/om/wiki/Quick-Start-%28om.next%29


There are very key points about GraphQL/Relay that the author of this post is missing.

For a Facebook user, all client-side cached data is stale. All of it. Posts are edited, profile pictures are changed (and those changes are referenced in other posts that must appear to be consistent with the image shown to the user). This is key to their value proposition - Facebook gives a new experience each time the user comes back to the site, thus the user will check the site multiple times a day. Now, you could cache if you could invalidate the caches... and this is fine to do on the server side, but it's downright impossible to push client-side cache invalidations at Facebook scale.

Another key insight is that "HTTP can fire network requests in parallel" is insufficient if your requests are dependent on each other's results. Let's say you want to crawl a graph of nodes. Are you going to do a BFS with a full HTTP round trip at each level, collecting touched nodes and transmitting the border each time? When the latency of that round-trip is at 2G speeds? How would you handle this in REST, under the assumption (as above) that it's fine to let the server handle caching?
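A toy model of that round-trip cost (the graph and the latency figure are assumptions, just for illustration): fetching a subgraph by following links pays latency once per fetch, while a single query that returns the whole subgraph pays it once total.

```python
# Toy comparison: BFS over a graph where every node fetch is a separate
# HTTP request, versus one query that returns the whole subgraph.

GRAPH = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}
LATENCY_MS = 300  # assumed round-trip latency on a slow (2G-ish) link

def bfs_per_node(root):
    """Count requests when each node is fetched individually, link by link."""
    seen, frontier, requests = {root}, [root], 1  # 1 request to fetch the root
    while frontier:
        nxt = []
        for node in frontier:
            for child in GRAPH[node]:
                if child not in seen:
                    seen.add(child)
                    nxt.append(child)
                    requests += 1  # each newly discovered node is another round trip
        frontier = nxt
    return requests

reqs = bfs_per_node("a")
print(reqs, "requests ->", reqs * LATENCY_MS, "ms total, vs", LATENCY_MS, "ms for one query")
```

Batching each BFS level into one request reduces this to one round trip per level of depth, but a single graph query collapses it to one round trip regardless of depth.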

Well, you might use the whole "HTTP's content types are a type system" concept that the author loves, and annotate your REST request with the structure of data you want back. UserWithPostsWithCommentsWithLikes is a type, after all, just as UserWithProfileInformation is a type. And at that point you don't want to name each of those separately; your Content-Type header would have some structure; it would almost read like a programming language. Oh, and you happen to have a server that can interpret that Content-Type piece by piece. You would eventually realize it's just more sane to break that out into the body of a POST request rather than holding on to REST just for REST's sake. And then, boom, you have GraphQL.

The post devolves even further towards the end, making a dubious link between Facebook's developer tooling and its grander ambitions. Making realtime graph search, and realtime insights from the graph of entities, effortless to the developer and the user... that's one way in which the web is evolving. Every company needs to be able to think outside the box. If everyone took "don't reinvent the wheel" completely literally, we wouldn't have airplanes.

Do I have gripes about the GraphQL syntax? Sure. Do I wish it was more like JSON Schema? Does grokking Relay migrations feel like reading a James Joyce novel? (With apologies.) Absolutely. But... are GraphQL/Relay a needed proposal to push innovation on the web forward? Yes.


Does anyone know of really good REST APIs?

I feel like a lot of people talk about REST being better or worse than other technologies, but I haven't seen a single 'perfect example' of a REST API. If that's because it's impossible to make one, then that clearly tells us something, but I can't see any reason why we couldn't make one, it just seems that no one does.

By "really good", I mean something that includes HATEOAS, versioning, uses content types correctly, utilises caching properly (probably HTTP 2 only with server-push, but still), etc etc.


This article is implying that a group of the best engineers in the world (Facebook and Netflix engineers came up with similar solutions in Relay and Falcor) are just dumb and don't know how the web works.

There are motivations behind those solutions, and they are documented across videos, papers and documentation. It seems to me that the author is ignoring the prior experience of such teams.


tl;dr: lol.

The author is claiming Relay is a scheme to kill the open web. A simpler theory is just that Relay solves a problem Facebook (and many other people it seems) are having. The idea that REST has shortcomings for Facebook's use case is an unthinkable thought for the author.

No argument is given for why we should consider REST to be a foundation of the open web. REST != HTTP.

The post is full of logic-defying assertions like "Facebook wants to replace REST with Relay for the same reason they want to replace DOM engines with React Native."


There seems to be a lot of attacking the messenger rather than the message in these comments. Interesting.


I think because the tone of the article itself is so horrible. I honestly couldn't read it.


Facebook is not trying to kill REST. It just doesn't work well for their use-case. So, they make their own system that suits them better.

What's wrong with that?


The point of the article is that Real REST does work well for these use cases (which is debatable, but that's what it says).


That's what the author is trying to say, but it comes off as defensive. Sure, REST probably can do these things, but by the sounds of things it's not ideal.


Why is this upvoted so much? It's a complete strawman.

Facebook's own intro at https://facebook.github.io/react/blog/2015/05/01/graphql-int... states:

"There is actually much debate about what exactly REST is and is not. We wish to avoid such debates. We are interested in the typical attributes of systems that self-identify as REST, rather than systems which are formally REST."

"We believe there are a number of weakness in typical REST systems"

The author of this article is talking about REST in general, while Facebook specifically is talking about typical implementations of it.


Read the whole article. The author addresses that exact quote.


To me that's the weaker point of the page. I'd rather read about why Facebook couldn't design a REST API that meets their needs, not why some sampling of "typical" REST APIs in the wild doesn't have the characteristics they want. Sure, they might not, but could FB build a REST API that does have the characteristics they want? The answer could still be no, of course.


Sounds like the author spent a lot of time learning REST and is now mad at the risk of learning something else and throwing REST through the window :p


I find it rather problematic that this story moved from within the top 10 posts down to currently 78 in a matter of a couple of minutes. ..... Not that anyone will see this now.

Facebook is an evil company headed by a psychopath that has said he wants to replace at the very least what people perceive to be the internet. You are killing the internet with every single thing you do on or with Facebook.


While I'm anti-Relay/GraphQL/co, I see a problem right here:

> Blaming REST For The Non-RESTfulness Of Fake "REST"

As I said in another thread here, people don't get what REST is because the original paper has done (IMHO) a poor job at explaining what it is about, masking simple concepts behind complicated sentences (academia style).

Instead of being pedantic, arrogant, and borderline insulting, which is unnecessary to make a point, it would be simpler and more useful to say that it is just about taking advantage of "natural" HTTP features and some web concepts such as hyperlinks. So there is no such thing as "fake REST" if one's intent is exactly that. Me not using content negotiation for API versioning doesn't make my API fake REST.

Too bad, because the OP makes a few interesting points, but they are drowned in some hateful garbage.


> the original paper has done (IMHO) a poor job as explaining what it is about

Please read <http://roy.gbiv.com/untangled/2008/specialization>. The paper is for experts. Non-experts can learn from a suitable technology book instead.


Complete strawman. Facebook is quite clear that GraphQL is designed to address weaknesses in REST as it is typically implemented, not weaknesses in the actual concept of REST.

> We are interested in the typical attributes of systems that self-identify as REST, rather than systems which are formally REST.


The Web and HTTP were designed as support systems for a set of hyper-linked documents. Javascript came much later, and it's of little surprise to me that devs are having a hard time figuring out how to make them work together. The article is FUD, but some points are worth a read.


One of the main advantages of Relay/GraphQL is that it lets front end devs iterate quickly without worrying about building new endpoints on the server side. Less context switching is a win when building out new features quickly.
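
A minimal sketch of that contrast (the schema, field names, and URLs below are all invented for illustration): with per-resource endpoints a view typically needs one round trip per resource, while a single GraphQL-style query declares the whole tree of data the view needs, so the front end can change what it asks for without a server change.

```javascript
// REST-style: one endpoint per resource, so three round trips here
const restRequests = [
  "/api/users/42",
  "/api/users/42/posts?limit=5",
  "/api/posts/7/comments?limit=3",
];

// GraphQL-style: one request whose body describes the whole data tree,
// so the front end can reshape its query without a new endpoint
const graphqlQuery = `
  query {
    user(id: 42) {
      name
      posts(limit: 5) {
        title
        comments(limit: 3) { body }
      }
    }
  }
`;

console.log(restRequests.length); // 3 round trips, vs. 1 for the query
```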


These people have been attacking Facebook for a long time now. First React, and now Relay/GraphQL. Their technical arguments were never strong. Claiming that existing technologies eg. DOM, HTTP/REST are good enough is not convincing (disclosure: I very much so buy Facebook's arguments).

But do they have a point in saying that Facebook is trying to control the web stack? I believe that Facebook are working to make the web better as a whole, but is this naïve? A better web for everyone helps big players like Facebook. Hopefully one day Facebook doesn't slowly start to close off these huge OSS projects.


Author says "they made a series of claims about why REST is broken" - without making "claims" a hyperlink to those original claims. So this whole piece lacks semantic context and is therefore a fail.


You are mistaken. Near the end there is a hyperlink to the claims document. It is titled "Facebook's terrible post" and points to the URL <https://facebook.github.io/react/blog/2015/05/01/graphql-int....


Sometimes an API can do something but doesn't make it easy. An alternative that makes it easy isn't just a little valuable, it's a lot valuable.

This also applies to correctness. An API that can do something may still be smashing a square peg through a round hole. An alternative that puts a round peg through a round hole is much better.

I don't know enough to say what this situation looks like.


There are PLENTY of problems with REST. You would be a fool to think Facebook abandoned it because they're too stupid to understand it.

If I were FB, I'd only feel compelled to use REST for a public API. Otherwise, I'd use the best tool for the job.


How did this article get flag killed?

I wanted to say something constructive about the topic: Any time you layer an arbitrary UX over a given data model, expressiveness is absolutely necessary in the protocol in order to minimize round trips. This is why SQL has been so successful on the server side.

An alternative to the "moving the whole graph to the client side" approach is the intercooler.js way [http://intercoolerjs.org/]: HTTP requests express UI/UX needs, rather than generalized data queries to support a client-side object graph and model.

This eliminates a whole swath of complex issues that come up with managing multiple instances of data in a distributed object graph environment, knocking system complexity down an order of magnitude.
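
A hedged sketch of the two styles described above (the function names and payloads are invented): the client-side-graph approach returns data the client must model and render, while the UI-driven approach returns the finished fragment, so the client keeps no object graph at all and just swaps markup into the page.

```javascript
// Client-side-graph style: JSON describing state, rendered client-side
function graphStyleResponse() {
  return JSON.stringify({ user: { id: 42, unread: 3 } });
}

// UI-driven style: the server renders the exact fragment the UX needs
function uiStyleResponse() {
  return '<span class="badge">3 unread</span>';
}

// The UI-driven client simply inserts the fragment; no model code needed
const fragment = uiStyleResponse();
console.log(fragment); // <span class="badge">3 unread</span>
```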


You should probably mention that you're the main contributor to intercooler.js. I've noticed quite a few comments from you plugging intercooler.js but with no mention of the fact that it's your project.


Sure: I am the main contributor to intercooler.js, I use it every day in production and, when it is relevant to a given article, I mention it.


The tone of that article doesn't make me want to join their team...


in 6 years here this is the first post i felt compelled to comment on without even reading the article. the comments here reinforce my decision...

if you have read the GraphQL docs, and built a sufficiently complex REST API, there is literally, technically speaking, no way to support this claim.

still i skimmed the article... it must be satire.


Interesting how this article was #1 on HN a few hours ago and mysteriously disappeared.

Conspiracy?


Flag killed.

The unanimity of the commentary on the article (which was ranty, but had some reasonable points to make) is disturbing as well. No one is willing to take the other side of the argument?


There was apparently only a 30 minute window to do so. HN moves too fast for nuanced counter-replies to the commentary. If I just made a top-level comment, it would say something along "I agree in broad strokes", which is not that useful.


And, from the looks of it, you would have been downvoted into oblivion for your trouble.

So there was a zerg rush of negative ad hominem on the article and then it was flag killed despite being a reasonable technical topic to discuss. Good times, good times.


Can someone explain to me how Relay compares to gRPC?


REST is one of the worst tech religions ever created yielding the most blinkered zealots. It's treated like a Bible where every word is taken as the Gospel truth that can't be tested, validated, compared or improved upon. Any alternative technology that reduces latency, improves performance and end user experience is considered an evil intrusion invalidating the purity of REST and must be vanquished.

In the name of REST, practitioners give themselves a free ticket to develop large, over-architected, dumb, chatty, high-latency solutions at the expense of the end user, as long as tech choices are made within their interpretation of REST. Normally technology serves the client; unless you're a REST zealot, in which case the needs of the client are secondary, and it's more important to obtain Internet kudos points by forcing your way up the maturity ladder.

No, we must develop and shoe-horn all app and user experiences within the constraints of an ambiguous thesis that was built to link and update documents and create server-driven, turn-by-turn apps. The fact that they can't correctly interpret what different parts of REST mean amongst themselves has generated programmer-decades' worth of wasted discussion in the most useless bikeshed ever.


This subthread started out inflammatory and turned, predictably, into a ridiculous flamewar. Please don't do that here.

We detached this from https://news.ycombinator.com/item?id=10413927 and marked it off-topic.


Well said. My blood boils thinking back to all the times I've heard "Yeah, but that's not REST." as a response to a perfectly reasonable and sane idea/suggestion.


  REST is one of the worst tech religions
  ever created yielding the most blinkered
  zealots... Any alternative technology ...
  is considered an evil intrusion invalidating
  the purity of REST and must be vanquished.
What a reasoned, sound, rational discourse. Good thing zealotry should be shat upon.

  No we must develop and shoe-horn all
  App and User experiences within the
  constraints of an ambiguous thesis that
  was built to link and update documents
  and create server-driven turn-by-turn
  apps.
Wow. Pretty harsh, considering that the author of said "ambiguous thesis" is also one of the handful of people responsible for defining the very infrastructure which makes your diatribe possible to disseminate. Along with some guys named Tim[1] and Henrik[2].

Perhaps, instead of using this "shoe-horn-mandatin', ambiguous thesis thinkin', can't-force-their-ideas-on-everyone" HTTP "religion", you'd prefer whatever Facebook tells you is better?

Oh, and ya might want to do that with something that doesn't use HTTP.

Coz if you did, then u r p0wned by teh man.

1 - http://www.w3.org/People/Berners-Lee/

2 - http://www.ietf.org/rfc/rfc1945.txt


> Wow. Pretty harsh, considering that the author of said "ambiguous thesis" is also one of the handful of people responsible for defining the very infrastructure which makes your diatribe possible to disseminate.

Actually no, TCP/IP/DNS is the backbone of the Internet and what made the Internet possible; HTTP was invented before Roy dropped his thesis. People like to give Roy kudos for inventing the Internet, which reinforces the "Bible" concept REST-afarians like to push while blindly ignoring any other superior technology that's not deeply rooted in a REST philosophy.

Here's a quote from Alan Kay on the Internet as it was invented in 1969, a system which handles billions of nodes, has never been stopped since it was turned on, and has had all its atoms replaced:

> The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free?

Take note the lack of any mention of REST. This is what he has to say about the tacked on "Web" you're attributing to the Internet:

> The Web, in comparison, is a joke. The Web was done by amateurs.

The thing that makes the Internet possible is its infrastructure; the thing that makes the web platform powerful is modern web browsers, i.e. the things that only the largest tech companies, spending decades of developer effort and investing hundreds of millions, can manage in order to maintain a competitive browser. Despite its primary focus and decade-long head start, the web is still getting eaten by native mobile apps, and not because they're better at adhering to the constraints of REST; quite the contrary, they're not grounded in the turn-by-turn, per-request model and just focus on providing the best end-user experience they can using the most suitable technology for each task.

HTTP is a conduit sitting in the middle, adding unnecessary overhead on each request; it's great for linking documents and composing static content sites but is poorly optimized for responsive or interactive web apps.

> you'd prefer whatever Facebook tells you is better?

No, people should learn to think for themselves and use their experience to pick the best tool for the job, not blindly follow mindless preachers who can only think in REST, ignoring anything superior that can deliver end users a better experience, i.e. the people whom technology should be serving, not the other way round.


  Actually no, TCP/IP/DNS is the backbone
  of the Internet and what made the Internet
  possible, HTTP was invented before Roy
  dropped his thesis. People like to give Roy
  kudos for inventing the Internet ...
Hi strawman[1], nice to see you again. I specifically stated that Fielding, Berners-Lee, and Frystyk were responsible for defining HTTP. A fact well documented by the aforementioned RFC-1945 reference. Since your reply was submitted by some form of HTTP client (a.k.a. "web browser"), by definition it used said protocol or the derivative "HTTPS" protocol to do so.

This is further supported by you acknowledging the same:

  ... the thing that makes the web platform
  powerful is modern web browsers ...
You see, these "modern web browsers" support a thing called "HTTP." And this "HTTP" thingie was defined by a handful of people (this sounds familiar).

Regarding web client performance versus native apps, you state:

  Which Despite its primary focus and
  decade-long head-start it's still getting
  eaten by Native Mobile Apps ...
This can largely be explained by bloated JavaScript-based SPAs[2] downloading megabytes of interpreted code. Want faster-performing interaction? Don't stuff dozens of script references into a page, which forces the system to download them all before the page is useful.

How could this be done? One approach is called Code-on-Demand[3]. But for this to be viable, the system must recognize the server as a first-class concern which collaborates with its clients instead of being a glorified database connection pool. But if a person is myopically focused on what executes solely in the browser, this might elude them.
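
A rough illustration of the Code-on-Demand idea (the endpoint and helper name below are made up): the client fetches a small piece of executable code only when a feature is actually needed, instead of shipping every script up front.

```javascript
// Fetch a feature's source from the server and execute it on demand.
// In a real browser this would typically be a dynamic import() or an
// injected script tag; new Function() keeps the sketch self-contained.
async function loadFeature(name, fetchImpl) {
  const res = await fetchImpl(`/code/${name}.js`);
  const source = await res.text();
  return new Function(source)();
}
```

A caller would only invoke something like `loadFeature("chart", fetch)` at the moment the user opens the chart view, so the initial page load stays small.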

  >  you'd prefer whatever Facebook tells
  you is better?

  No people should learn to think for
  themselves and use their experience to
  pick the best tool for the job ...
Here's a point we agree upon. Nice to end it on a high note.

1 - http://www.asa3.org/ASA/education/think/strawman.htm

2 - https://en.wikipedia.org/wiki/Single-page_application

3 - http://restcookbook.com/Basics/codeondemand/


[flagged]


  Thanks but I don't need a misguided history
  lesson skewed to support your warped view of
  events attributing the creation of the Web to
  your lord and savior who you wish was sole
  entity responsible for the Internet but was
  instead created by the geniuses who built the
  ARPANET in 1969 which is what formed the
  technical foundation and what made the
  Internet possible.
I have no idea where your anger, vitriol, and venom originate. I humbly suggest you find someone professionally qualified to help you work through these issues. Perhaps you are experiencing a persecutory delusion[1], though I freely state I am not a clinical psychiatrist and only present this as a possibility from someone who has never met you personally.

  Your fixation of the HTTP protocol was
  invented by Tim Berners-Lee, on-his-own.
  The first version created in 1991 supported
  the most important GET Request method
  (http://www.w3.org/Protocols/HTTP/AsImplemented.html).
Of course Tim Berners-Lee has been and remains a major force in defining the HTTP protocol and "the Web" as we know it. This is well documented and an indisputable fact. However, as great as his contributions have been and continue to be, he has not operated alone.

What you fail to mention in the statement you quote is:

  This is a subset of the full HTTP protocol,
  and is known as HTTP 0.9.
  (http://www.w3.org/Protocols/HTTP/AsImplemented.html)
The HTTP protocol has not stood still and has had contributors throughout the last 24 years. If you cannot accept this, that is your issue to deal with and not mine.

If you had even read Section 1.1 of RFC-1945, you could have saved yourself the embarrassment of saying "The first version created in 1991", as the RFC reads thusly:

  HTTP has been in use by the World-Wide Web
  global information initiative since 1990.
Remember that one of the authors of this RFC is Mr. Tim Berners-Lee.

You then go on to spew:

  The fact you glorify a simple text
  protocol as the magical thing that makes
  the web possible is indicative of the
  extent of your religious delusion to a
  single technology.
Well, if you cannot accept my explanation of the importance of HTTP, perhaps you can fathom what Mr. Berners-Lee wrote instead:

  Did you invent the Internet?
  
  No, no, no!
  
  ...

  I just had to take the hypertext idea and
  connect it to the TCP and DNS ideas and
  -- ta-da! -- the World Wide Web.[2]
Can you accept/understand that the primary contributor to what you colloquially describe as what "makes the web possible" is, in fact, stating that the web was made by combining hypertext with previously established technologies? See also previous mention of persecutory delusion[1].

  Go ahead and try make versions of Gmail,
  Facebook, Google Maps, Google Docs / Office,
  etc with dumb clients and full page
  reloads on every request ...
The fact that you conflate a REST architecture with "full page reloads" only shows your ignorance. There is nothing precluding an XMLHttpRequest sent from a browser client to a REST endpoint. Actually, it is quite common, and the REST server wouldn't know or care either way.
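
A sketch of that point (the URL is hypothetical): a browser client can call a REST endpoint asynchronously and update the page in place. Nothing in REST mandates a full page reload, and the server can't even tell which style of client made the request.

```javascript
// Plain asynchronous GET against a REST resource, parsed as JSON.
// fetchImpl is injectable so the sketch can be exercised outside a
// browser; in a page you would pass the built-in fetch.
function fetchResource(url, fetchImpl) {
  return fetchImpl(url).then((res) => res.json());
}

// In a browser, updating the page in place with no reload:
//   fetchResource("/api/articles/7", fetch)
//     .then((article) => { document.title = article.title; });
```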

  Any fool can create simple a text protocol,
  any fool can create a dumb, server-driven
  client, any fool can document a
  code-on-demand solution.
No, any fool can trivialize the accomplishments of those before them. Any fool can benefit from infrastructure which allows them to be oblivious to how the bytes sent around the world "just work." Any fool can lament about performance on a resource constrained platform, such as a mobile device, and reject possibilities to address them.

  That isn't to say HTTP isn't a well-thought
  out and defined work, it is ...
That must have seriously hurt you to type.

1 - http://psychcentral.com/encyclopedia/2008/persecutory-delusi...

2 - http://www.w3.org/People/Berners-Lee/Kids.html#invent


Your comments have broken the HN guidelines by being personally abusive. That's not allowed, regardless of how wrongly someone else has behaved, and we ban accounts that do this repeatedly. Please re-read the guidelines and post civilly and substantively or not at all.

https://news.ycombinator.com/newsguidelines.html


I see your point and apologize for letting the interaction degenerate to this point. Allowing myself to intermix reactionary statements with this post is a regrettable decision. The wiser choice would have been to "walk away from the thread."


What a rude, hateful, comment. You should probably re-read the site guidelines.


[flagged]


You probably need to re-read the site guidelines.



