Creating a stack that complex just to render Static Content... Seriously?
I'm fairly confident that only the author of this project can do something with the codebase.
It must be an absolute mess between Zeplin, Storybook, Apollo, GraphQL, Next, Yeoman, etc.
Just why ?
Can't Airbnb invest in one good CMS and tooling solution? Don't they have a CTO who defines the company's tech governance and tech stack?
Isn't building landing pages for "Luxury Destination" one of their core businesses?
Does each Airbnb engineer create their own stack for a tiny part of the website?
It just bugs me to see something like this, and it reminds me of their 'React Native Fiasco', where they decided to use React Native but their mobile engineers didn't like JS, so the engineers on each platform just wrote the app using bindings so they could use Java or Objective-C.
Sometimes I really tell myself that working for a FAANG must be awesome, but then this type of content pops up and it reminds me that I should either stay at my current job or create my own business to avoid all this.
Airbnb does not, in fact, have a CTO who dictates the company's tech governance and/or stack. They prefer to run things in a federated manner, with individual teams making the decisions they feel are best for them. While teams are encouraged (or required) to draw up design docs and have them reviewed by an architecture review group, the group's recommendations are non-binding.
This model has advantages and disadvantages. On the upside, it creates an environment where people can take risks and do things that haven't been done inside of the company before. On the other hand, it means people sometimes go out on a limb and push the company into supporting something that turns out not to be sustainable in the long term.
As a matter of personal preference, I like to have a set toolchain that a company is built around. But it would be unwise to suggest that Airbnb's strategy hasn't worked out pretty well for them overall.
But I always open the site with some apprehension, since I know I'm in for a bad user experience. It's sluggish, and I can't bring myself to appreciate how the layout and even the menus differ (or disappear entirely) depending on which area of the site I visit. It makes navigation cumbersome and hard to remember between visits.
I would switch to a similar service in a jiffy if it had solved these problems.
I always suspected a lack of top-down coordination was the reason for these issues; thanks for confirming. I've come to believe that this kind of loose federation strategy mostly suits junior devs (on whom a startup might well be deeply dependent), hardly a serious long-haul business. I expect they will change policy in due course, or perish.
I don't think AirBnb is alone in this. There are many sites and apps that change their UIs radically with disturbing regularity. I've begun to wonder if there is a glut of UX/UI people in tech right now, and whether this endless cycle of zero-value-added change is just an attempt to justify their continued employment.
How many risks are there really to take, and how much genuine need can there be to do things no one has done before?
What about recommendations? What about map or multi-constraint search? What about fraud detection/prevention, activation/reactivation email triggers, an analysis system to help hosts be more financially successful on the platform, building an ecosystem of services where people can make money while helping hosts make money, SEM optimization, the ratings/reviews system, the communications between guests and hosts or guests and support, or 20 other things that are likely going on under the covers? Their mostly static content is likely just the visible tip of the iceberg.
Of course, that only accounts for one of the various pages you'll see in the checkout flow, and I agree that not everything is as snappy as it could be. But keeping things snappy turns out to be a very hard problem when you're growing as quickly as Airbnb has.
I don't understand the rush to use GraphQL everywhere: does everybody query sparse, deeply nested data on their website? In my experience, Apollo works well except when it doesn't, and then you have a lot of magic going on.
Zeplin works quite well for our team and creates a nice connection to our designers. Storybook, on the other hand, not so much. At first we developers used it; then we had to update some things for Apollo, but Storybook was not ready for that. Now everything runs again, but nobody actively uses Storybook anymore...
I have the feeling that the software industry is often driven by personal preference instead of sound decisions. I see projects that use microservices without any reason, use React for static content, dockerize everything to a ridiculous degree, and adopt K8s because why not. All of that because it's interesting for the developer, not because it's good for the user.
Exactly. I'm forced to use GraphQL at work for an internal-only API, and what a nightmare: what would be a nice one-line REST API query can take hundreds of lines (not exaggerating).
If you're serving a public-facing API at massive scale, yes, GraphQL will save you bandwidth. If not, what a pain in the *. For most engineers it's a solution in search of a problem.
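To make that contrast concrete, here is a minimal sketch (endpoint paths and field names are invented for illustration) of the extra ceremony the same read can involve: the REST version is one URL, while the GraphQL version needs a query document plus a variables object sent as a JSON POST body, and a matching schema type and resolver on the server (not shown) before any of it works.

```typescript
// Hypothetical endpoints for illustration only.

// REST: the whole request is one URL.
const restUrl = (userId: string) => `/api/users/${userId}?fields=name,email`;

// GraphQL: the same read needs a query document and a variables object,
// serialized into a JSON POST body.
const GET_USER = `
  query GetUser($id: ID!) {
    user(id: $id) {
      name
      email
    }
  }
`;

const graphqlBody = (userId: string) =>
  JSON.stringify({ query: GET_USER, variables: { id: userId } });
```

Whether the extra structure pays for itself depends entirely on how many clients and how many differently-shaped reads share that endpoint.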
I work primarily as a FE software engineer who spends a lot of time researching and using the JS ecosystem. I have gone through the GraphQL/Apollo docs multiple times in order to really understand the value add. Fetching, bandwidth, caching, and persistence appear to be the biggest value adds of the tech. To me, these are the easiest things to do in a React/Redux SPA. Fetching data isn't difficult; it's all the derived state that is the hard part. Most of the "business" logic for me is formatting the data in a way that makes sense for whatever UI component I'm building, combining multiple streams of data for the UI, deciding when to refresh data, optimistic UI, and offline support. The "declarative fetching" is not really that much of a sell to me, because that piece of most projects, even of intermediate size, is relatively small.
I realize that apollo is trying to solve some of the optimistic UI, offline support, but it does not seem to fit super well atm. The other really big issue I have with apollo is like you stated: it's easy until you want to do something that it can't do. This is the worst position to be in because for 80% of the use-cases it works, but that other 20% where it doesn't fit at all can make a team's life a living hell.
Not to mention that most of the time, for any large enough SPA, you will still need Redux. If I still need Redux, then Apollo solves nothing for me.
Of course the bigger issue, though, is the Redux Dev Tools/Thunks/Sagas/Observables ecosystem where you want a richer experience and/or already have existing code investments. Apollo has some equivalents to those (Apollo Dev Tools; resolvers can return promises, taking care of a lot of basic thunks/sagas), but it probably needs richer options for others. I know redux-observable is currently a big need for several of my applications and I don't currently know any way to approach that in Apollo other than maybe trying my luck with a custom "Link" and that API looks more intimidating than it probably is, enough so that I haven't had the investment need to approach it. (Then again, most of my applications need to be offline-first so GraphQL in general isn't a great fit for them, though Apollo looks like options might be possible eventually, if someone built a little more infrastructure [Links] for them.)
The next chapter of intrinsically rewarding work will involve simplifying the complex. Trimming the fat. Reinventing the Zen of Python. Most importantly, it will be glorious.
I don't work at Airbnb but I am the CEO of another startup. Our main website (https://rainway.com/) used three different technologies to build it. Why? Because it streamlined development. One team could focus on the code powering the blog, while another could write static pages that query an API. The build process brings it all together.
Tools such as Figma (we moved from Zeplin) do an amazing job of letting you design entire user interfaces and interactions and providing developers with all the materials needed to implement them. If you're building a homepage for a mom-and-pop shop, download WordPress. If you need to build something across multiple teams, you need tools that make it easier.
Would love to know more about that, because looking at your website, it's just plain HTML with jQuery.
Which, in my opinion, is how you should be building landing pages.
Their last dot-X release was also 9 months after that.
You should also always have a look at roadmap (https://github.com/jquery/jquery/wiki/Roadmap) if you are trying to get a read on any project's longevity.
In a way it's a good thing that a mature library like jQuery isn't released too frequently: all the websites that use it work just a little bit snappier, because jQuery is usually already in the browser's cache from some other site, and if not, the nearest CDN probably has it.
On GitHub, it says that 3.3.1 was released January 20th, 2018:
Releases matter, as that is what gets the changes to end-users, and infrequent releases typically indicate a stagnant project.
What exactly is Static Content, and are you implying AirBnB has it?
Does its mobile app also have Static Content?
One of them that I haven't really been able to solve so far is cache invalidation / removing deleted items from the cache.
How have you been finding Apollo around that area?
While the main technologies are pretty mature and well documented, gluing them together is often a lot trickier and left entirely to the user. If you need a way for Apollo to communicate with the database, you have to handle that yourself. Same thing for authentication and access control. File uploads also require a separate library.
I think the whole stack has a lot of promise, but it's going to have trouble gaining steam as long as it forces the user to think about these sort of things.
Something else I've noticed is that it shifts a lot of the work to the back-end while at the same time introducing new problems you never had with REST. With REST, you had endpoints with clearly defined inputs and outputs, which were easy to reason about and secure. This is not the case with GraphQL - because it is so expressive, it's hard to cover every possible use case, especially if you're doing the entire security and access control yourself. The other problem GraphQL has that REST does not is recursive queries - it's entirely possible to request something like this:
authors -> posts -> comments -> authors -> posts -> comments ...
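One common server-side guard against that kind of query is rejecting anything past a maximum nesting depth. A naive sketch follows: it just counts brace nesting in the raw query string (an assumption made to keep the example self-contained; real servers validate the parsed AST instead, e.g. via a library like graphql-depth-limit, since string scanning can be fooled by braces inside string literals).

```typescript
// Naive depth measure: tracks `{` nesting in a raw query string.
function queryDepth(query: string): number {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === "{") max = Math.max(max, ++depth);
    else if (ch === "}") depth--;
  }
  return max;
}

const MAX_DEPTH = 5;

function isAllowed(query: string): boolean {
  return queryDepth(query) <= MAX_DEPTH;
}

// The recursive query pattern above blows past any sane limit:
const abusive =
  "{ authors { posts { comments { authors { posts { comments { id } } } } } } }";
```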
Another thing I should mention: while Prisma and Apollo are pretty stable and well documented, various smaller libraries are often not. This is not an issue specific to GraphQL, but because it has a smaller ecosystem than, say, React, you're a lot more likely to run into it.
Lastly, the pace of development is bonkers. I've run into situations where an 8-month-old post was already obsolete, or where major versions were a year and a half apart.
The issues you mentioned are issues if you want one (or a few) framework or library to handle all of your web development needs, but rarely does one library work for web development at scale.
I guess my main qualm was that, because GraphQL does a relatively poor job of explaining why you should use it, people like me get the wrong idea about what kind of problems it solves and why you should use it instead of REST.
Ideally I'd like to be able to specify a unique identifier for a list through a directive on a query, and then specify the same identifier on the mutation, to have the result of the list in the mutation automatically replace the contents of that key in the cache (or append onto it for use cases like pagination, possibly in combination with another directive for sorting the combined list on the client-side?).
I was hoping to be able to do something like this using the @connection directive: https://www.apollographql.com/docs/react/features/pagination...
Unfortunately, when I tried this it looks like the @connection directive actually creates separate nested keys for mutations vs queries, so unfortunately this use case isn't possible yet. I'd love to hear how others are approaching this problem, especially those using other caching graphql clients like Relay. Or maybe I'm missing some better way to handle this in Apollo itself?
I was hoping to be able to expose the list of todos on the createTodo mutation response, and have Apollo update the cache automatically by querying for it in the mutation response, rather than writing the newly created item to the appropriate location in the cache manually. From my research into it so far, it looks like that's not currently possible, but I'd love to be wrong about that!
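For context, the manual cache write being avoided here looks roughly like the following toy sketch: a normalized cache where, after the mutation, you write the new entity yourself and append its key to the cached list the UI observes. Names like `allTodos` are hypothetical; in Apollo this kind of logic typically lives in the mutation's `update` callback against the `InMemoryCache`.

```typescript
interface Todo { id: string; text: string }

// Toy normalized cache: entities keyed by id, plus lists of entity keys.
const cache = {
  entities: new Map<string, Todo>(),
  lists: new Map<string, string[]>(),
};

// Roughly what a manual `update` callback does: write the new entity,
// then append its key to the cached list the UI is observing.
function writeCreatedTodo(listKey: string, todo: Todo): void {
  const key = `Todo:${todo.id}`;
  cache.entities.set(key, todo);
  const list = cache.lists.get(listKey) ?? [];
  cache.lists.set(listKey, [...list, key]);
}

writeCreatedTodo("allTodos", { id: "1", text: "write docs" });
```

It's not much code per mutation, but it has to be kept in sync with every query shape that displays the list, which is exactly the chore the automatic approach would remove.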
I LOVE how great Apollo has made my dev experience on the frontend. Cache invalidation is a hard problem though and I don't blame them for not tackling it until version 3.0.
The fact that they can already update data based off of fragments for all queries observing the fragment makes me believe they could extend a similar solution to invalidate a fragment and have subscribed queries remove that fragment.
Could you write a bit more about your learnings here?
From my inspection, it seems the only official Apollo Server implementation is in Node.js. As I understand it, you can replace Apollo Server with any GraphQL server, for example Scala's Sangria. So your Apollo Client can talk to any GraphQL server, and Apollo Client is probably the best and most useful part of the Apollo stack anyway.
One good website for figuring out how to use different Apollo stacks is howtographql.com (I am not affiliated with them; I just like their service). It has free tutorials for GraphQL servers in Elixir, Python, Node.js, Scala, Ruby, and Java. Howtographql also has a React and Apollo Client tutorial.
Apollo Client implementations include:
- Native iOS with Swift
- Native Android with Java
- React Native
Maybe I am in the minority, but writing boilerplate code all the time just isn't something that interests me.
But mostly I'm curious how your point relates to this blog post. Maybe you can clarify? The stuff they're doing with Apollo here that potentially comes across as boilerplate also gets query batching, caching, refetching, etc. for free (in addition to the fancy stuff they also talk about like automatic mocks). It's not like they're "just sending a request." Maybe you can show me how you do all that in another framework without some setup/wrappers/etc.?
* Checks if you already have the resource to prevent sending an unnecessary request (can be forced)
* Retrieves the resource based on a convention for the URL
* Parses the JSON based on a convention
* Stores the resource in the "store" for use in the app
Batching is also very simple thanks to a plugin from Netflix:
The reason the Ember community doesn't talk about any of this stuff is because, from the developers perspective, nothing interesting happens, which is why developer productivity is high in Ember.
Which boilerplate do you take issue with? There are a couple imports you might find ugly, I suppose. Most likely though the blog post we're talking about isn't a good impression of what "just making a request" looks like. :)
Another thing to consider though, and something I've done in the past, is that you can actually embed the GraphQL resolver runtime completely on the client and easily adapt REST endpoints to it. It's pretty neat! So you can use all the benefits of Apollo without actually having a server that speaks GraphQL.
This Apollo project is one such way to do that: https://github.com/apollographql/apollo-link-rest (although not the strategy I used... I don't think Apollo even existed yet at the time)
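A stripped-down version of that client-side adapter idea (the endpoint paths and the REST-to-schema field renames below are invented for illustration): resolver functions live in the browser and translate GraphQL-shaped field access into REST calls, so the UI layer never knows the server only speaks REST.

```typescript
// `RestGet` stands in for fetch().then(r => r.json()); it's synchronous
// here so the sketch stays self-contained.
type RestGet = (path: string) => any;

// Client-side "resolvers": each field knows which REST endpoint feeds it
// and how to reshape the response to match the GraphQL schema.
function makeResolvers(get: RestGet) {
  return {
    user(id: string) {
      const raw = get(`/api/users/${id}`);
      return {
        id: raw.id,
        name: raw.full_name, // rename the REST field to the schema's field
        posts: () => get(`/api/users/${id}/posts`), // nested field, fetched lazily
      };
    },
  };
}
```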
The number of stars would suggest the client approach is more popular. Is that consistent with your experience, or am I not comparing apples to apples?
Approaches like rest-graphql on the other hand involve being able to control the API piece, thus being able to offer the frontend a real GraphQL endpoint to talk to. Adapting REST APIs to GraphQL like this is definitely much more common than the client-side use case (which is more of a last resort), so don't let the stars fool you. It's just that (in my experience) people tend to roll REST-to-GraphQL resolvers by hand rather than use a helper like rest-graphql, because it tends to not be too difficult. Writing GraphQL resolvers is one of the aspects of it I find most enjoyable, actually.
rest-graphql is for the other direction: GraphQL to REST. I had previously investigated it in order to switch a REST backend to GraphQL while providing a thin conversion layer, so that several different clients could convert at their leisure.
These two things are not at all comparable.
Also if you're planning in the future to migrate the API to GraphQL, and this is just a first step, you'll be able to leave most of the frontend code you wrote the same and just switch out the "link" part. It's kinda like how most languages have generic database adapters where you don't need to think about whether you're talking to MySQL/Postgres/SQLite etc. most of the time.
I wonder what a newer dev's take would be? Would they find UI elements as values/expressions weird, or UI elements as strings weird?
We ended up switching to React by rewriting the Ember app in a few weeks and moving significantly faster as a team, gaining more flexibility & better abstractions (which we had to write, but it was not a problem with our engineers' quality). Since then, my whole organization has only written new UIs using React to my knowledge.
I personally would rather use Angular (latest) or React than use Ember again. I do appreciate some of the things Ember has brought to the ecosystem, though, like a robust CLI à la Rails, but I think the routing convention needs more flexibility, and Ember Data needs some love to fix how easy it is to get into buggy situations.
For example: let's say when component A or component B get rendered, they need to fetch users in order to show something about them. You only want to make the request if either of these components is actually rendered on the screen. What if they both get rendered? Is something going to know to reuse the inflight request so you don't have two identical requests? Or is it just naively going to make unnecessary requests? What if some other thing already fetched the user list earlier, are either of them still going to make the request anyway? Or use the data that's already there?
Another example: you've got your result from the users endpoint. Now you fetch some user information from the "blog author" endpoint. Information about the same user is in both responses. What's the authoritative client-side source of information about that user now? Are you now going to potentially be displaying stale/conflicting info about that user in 2+ different places?
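The usual answer to that second example is a normalized store: every response gets broken into entities keyed by id and merged into a single client-side copy, so both screens read the same record. A minimal sketch (the field names are invented):

```typescript
type Entity = { id: string; [key: string]: unknown };

// Normalized store: one record per entity id, merged from every response
// that mentions it, so there's a single authoritative client-side copy.
const entities = new Map<string, Entity>();

function merge(incoming: Entity): Entity {
  const existing = entities.get(incoming.id) ?? { id: incoming.id };
  const merged = { ...existing, ...incoming };
  entities.set(incoming.id, merged);
  return merged;
}

// The /users response knows the name; the /blog-authors response knows
// the bio. Both land on the same record instead of two stale copies.
merge({ id: "7", name: "Ada" });
merge({ id: "7", bio: "Writes about compilers" });
```

This is essentially what Apollo's normalized cache (or normalizr with Redux) does for you automatically.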
You can always just keep things simple if you want, and people in React and GraphQL land are happy to do that, too. But a lot of people are also focused on building complex applications. This is just to give you some perspective on the "why" here.
I really don't see what GraphQL has to do with this problem. What you're describing are issues that pop up with any external datastore. GraphQL is just a protocol. If there are clients that take care of that type of caching, that's an implementation detail. You can also have REST clients that do the same.
> Are you now going to potentially be displaying stale/conflicting info about that user in 2+ different places?
If you knew you needed info about the user as a blog author as well, you could've included that in the users request. Isn't that what you would've done in GraphQL anyway?
I'm not trying to detract from GraphQL's usefulness, but you can accomplish the same things in REST pretty easily too, especially if you control both server and client code.
IMO, GraphQL really shines when you're implementing clients for APIs that you don't already control. In that case, the flexibility is great. But if you're building both the APIs and the clients, REST works (and has worked) pretty easily.
For (2), the complexity is still there, just in the parent: it needs an entity store and to merge/invalidate when the same entity appears in multiple responses. So still not just a simple case of making some REST API calls.
Well, usually the parent decides whether the children will be rendered or not.
React Hooks (currently in alpha for the next release of React) finally clean up the component lifecycle complexity I think you're referring to. Nader Dabit has a great post on how to use React Hooks with GraphQL.
Probably the OP has seen too many of those posts and doesn't have the right frame of reference to know what to do, and Apollo looks like more of the same.
As they're using that mock data for testing, the data returned has to be deterministic, so I'm guessing they're either making up mock data manually by hard-coding that data in resolvers on the mock server implementation, or using some kind of fake-data generating library to generate that data dynamically based on graphql type in the schema, with a fixed seed so that the data doesn't change between runs.
I'm hoping to explore the latter approach a bit more as I feel it would be great for testing productivity to have mock data generated from the schema instead of having to manually write them for every new thing added.
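The fixed-seed idea hinges on a seeded PRNG: given the same seed, the generator emits the same sequence, so mock data is byte-for-byte identical across runs. A tiny sketch (the linear congruential generator constants are standard; the user shape and name pool are made up):

```typescript
// Tiny seeded PRNG (linear congruential generator): same seed, same sequence.
function makeRng(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 2 ** 32;
  };
}

const NAMES = ["Ada", "Grace", "Alan", "Edsger"];

// Hypothetical generator: mock users driven entirely by the seed, so
// snapshot tests against them never flake.
function mockUsers(seed: number, count: number) {
  const rng = makeRng(seed);
  return Array.from({ length: count }, (_, i) => ({
    id: `user-${i}`,
    name: NAMES[Math.floor(rng() * NAMES.length)],
  }));
}
```

Schema-driven mocking tools typically do the same thing per GraphQL type, with the seed making the output deterministic.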
We have a shared development environment with a persistent (and thus deterministic) dataset. So when one developer runs a query against that dataset, another developer could run the same query and get the same result.
The best thing about this, of course, is that if your Storybook data looks at listing 112358, you can also open that same listing in development and see the same result in the product. Very powerful.
My question was more about how that persistent dataset in your shared development environment is created, though, since that dataset has to exist before people can query against it.
Curious if that creation process is manual or automated somehow through inference on the types in the schema.
In my experience, if the system is working properly, there's not a lot of room for type-driven inference. We often get a design with very explicit data present, and we want to bring that data in rather than calling on Casual or
Although I feel the automated data-generation approach still has value, in that it can introduce some variance into the dataset to better represent real-world data, potentially uncovering edge cases in the design or implementation of the UI that the original, conveniently customized dataset that came with the design wouldn't. Such an approach will likely also need to offer the ability to override or customize the generated data on a case-by-case basis in order to be useful for real-world applications, so we'll probably end up with a bit of a hybrid approach at the end of the day regardless.
I work on some tech giant cyber security stuff, and it's obvious if you look at it that what my team designs solves a tech giant scale problem in the context of past engineering decisions made to solve other tech giant scale problems this company had. I probably wouldn't recommend my team's approaches to anyone who has less than ~5000 engineers.
From this experience I've adopted a heuristic: first consider the scale the system was built to address. If you are not in the ballpark of that scale, give it a long hard look before you adopt it.