Designing very large JavaScript applications (medium.com)
434 points by kawera 10 months ago | 154 comments



I love Malte's take on what makes a senior engineer with regards to API design:

> The way I would talk about myself as a senior engineer is that I’d say “I know how I would solve the problem” and because I know how I would solve it I could also teach someone else to do it. And my theory is that the next level is that I can say about myself “I know how others would solve the problem”. Let’s make that a bit more concrete. You make that sentence: “I can anticipate how the API choices that I’m making, or the abstractions that I’m introducing into a project, how they impact how other people would solve a problem.”


I think "How confusing is this going to be to the one consuming this?" (assuming you're not writing the front end as well) is the more important question to ask.


This way of thinking was my main motivation behind Grip [0]. Use Readme-driven development and "play" with the API--whether it's in the form of a CLI, a library, or a web API--through usage examples.

I've found this to be a wonderful way to get into the heads of dev users before you approach them directly.

[0]: https://github.com/joeyespo/grip


And the one most rarely asked, in my experience, too.


In my opinion, asking this question is the key benefit of test-driven development. You get to be the consumer, write down your expectations, and then work backwards from yours-as-consumer expectations.


I find the opposite tends to happen in Java. In the pursuit of testability, people make libraries much harder to use and understand.


Writing good tests is a skill that has to be developed, just like anything else.


It should be, but too often people write bad tests that test what they wrote and not what it's supposed to do.

Writing good tests is a skill and most developers don’t seem to be as skilled in it as they think.

The same concepts power much of Agile too though: XP & Scrum both complement this.


Isn't that basically what he said? :)


> I can anticipate how the API choices that I’m making, or the abstractions that I’m introducing into a project, how they impact how other people would solve a problem.

I feel like if you take "API" out of this sentence, this is really the benchmark for any senior developer.


Yep, a very refreshing descriptor of a senior-level dev. It is also a very intuitive notion of the behavior of a senior-level dev, even if not frequently articulated as such. I'm working with a fellow dev right now where this notion of empathy is lacking.


Yes, I loved it too, and it is super relevant. I've had my fair share of team swaps, both from backend to frontend and back, and it is always very interesting to see how engineers who have been on a specific team/role for a while tend to lack empathy and forget that other engineers who end up using their APIs don't share the same context and knowledge (or worse, sometimes it just comes from laziness).


I've always disliked the "BIG TEXT CHOPPED-OUT BUT YOU'LL READ IT BELOW AGAIN IN JUST A SECOND" paradigm in magazines and newspapers. Like, I'm reading the article anyway, why suddenly spit some of the words in my face super big? I'm guessing it's for skimmers, catch their eye and bring them into an interesting paragraph that's further down from the top.

But this article does it at the beginning... we read the sentence "Hello, I used to build very large JavaScript applications" three times. First as an image, then as a label for the image... and then as the first sentence of the article. Why lol


Recently I've been annoyed at coming into a Hacker News thread and finding the top comment is something completely irrelevant.


meta


> Why lol

Because, even closer to the beginning, in the real first sentence of the article (by which I mean, the first sentence after the title), it says, "This is a mildly edited transcript of my JSConf Australia talk."

This is not an example of the "big text chopped out" paradigm from magazines, it's an example of the, "each slide contains a brief of the most important part of what you'll be saying while that slide is being displayed" paradigm from conference lectures.


The problem is those slides hinder the readability of the actual article.

For a reader, you get into the rhythm of the writer's voice, only to be interrupted by a giant image of text every other paragraph. It's a battle between just skimming the slides, or trying to read the content.

If the author had just swapped out the "loud" text images for subtle headers, the article would actually flow.


And therein is also seen a limitation of using someone else’s blog engine; on large (wide) displays, this would be better done with the slides on one side, and the speech on the other, appropriately lined up. But you can’t do that when you’re using Medium for the post.


I've been thinking the exact same thing for years and couldn't put it into words.

The most annoying thing is I can never be sure whether the BIG TEXT CHOPPED-OUT is actually going to be repeated again later in the article. Otherwise I'd just happily ignore those big sections every single time. As it is now, I have to read it and then get mad when moments later I find myself reading the exact same thing. I wish there was a way to stop this.

edit: Thanks to those commenters who mentioned https://en.wikipedia.org/wiki/Pull_quote - It's nice to be able to properly refer to that which I despise.


The images are labeled for accessibility reasons; they're used as fallback descriptions for figures and images. The slides are wrapped with `<figure>` tags, and the captions with `<figcaption>` tags. They're so that screen readers can interpret the figure, which is why the text is often redundant.


...But images have the alt attribute when you want to help screen readers without cluttering things up for people who read the image.


I guess Medium should offer a visually-hidden style then.


These look like they were adapted from the talk linked below, and each slide image has a description, probably for accessibility (screen readers).

https://www.youtube.com/watch?v=ZZmUwXEiPm4


Those appear to be slides from the talk that's the original form of this article.

This doesn't necessarily justify the first slide's presence at the very beginning, but it might explain it.


https://en.wikipedia.org/wiki/Pull_quote

I think they're more useful in a newspaper because they can catch the eye as the reader scans the page. On the web they are fairly useless.


> I've always disliked the "BIG TEXT CHOPPED-OUT BUT YOU'LL READ IT BELOW AGAIN IN JUST A SECOND" paradigm in magazines and newspapers.

Just came here to say you're not alone; I've always hated this too, and I really wish it hadn't gotten adopted on the web


It's sort of like reality TV teasers but in printed form. Right before the commercial break they show a clip of something you'll see "when we return."


That text (here and in magazines, newspapers) isn't for you. It's for the casual browser who isn't reading the article yet. It's trying to suck you in and/or give you a teensy bite of info even if you don't read the article. https://en.wikipedia.org/wiki/Pull_quote

Of course this article format is kinda pushing the limits I agree.


Which is why it doesn't have any place being used on a web page :/.


Why not?

I admit to often reading things that are far outside my wheelhouse, or not always topics I assume I will enjoy. If I start one of these and find myself losing interest, these quotes and callouts can draw me further in or push me right out.


This is a talk so it makes sense to put the text on the slide and also talk about it.

The image caption I took as Medium's implementation of alt text, for cases where the images don't load.

In this case they just happened to be similar.


It's like putting a cover on a book that is related to the matter in the book: to keep you reading, because you may start getting bored, they pull out a sound bite. Pretty obvious.


Yeah, the picture labels in that article looked machine-generated--generic and redundant unless you're vision-impaired.


I always assumed it was to bulk the article out to fit the printed page.


More like designing very long Medium posts.

Is there any substance to this "we invented 'enhance', a reverse dependency pattern" thing or is it just old wine in new, javascripty bottles?


This is just basic dependency injection.

I agree it is mildly amusing to watch Javascript devs excitedly discover these principles but it does show that these principles are universal. Virtually everything in this talk applies to building very large Java and C++ applications. (And here I don't mean 10mb... rather global systems made up of dozens of services and hundreds of components.) All of this is about modularity -- of configuration, of state, of logic, and of concerns.


I think what he's pushing is slightly different from dependency injection.

For one, one of the hallmarks of DI is that you should have a single, central location where all your dependencies are assembled. Whereas one of the slides in the talk is, "Avoid central configuration at all cost." His concrete example is with CSS files, but I could definitely see a similar principle applying to your "composition.json" file in a large DI application.

It seems more to me like he's talking about the opposite of DI. It's a possible solution to how you decouple modules from their dependencies. But in DI, you accomplish it by having a central orchestrator inject the dependencies into your modules. In this "enhance" mechanism, by contrast, you reject that central orchestration, and instead have dependencies independently injecting themselves into the modules.
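A rough sketch of that inversion in JavaScript (all names here are hypothetical, not the actual Google implementation): the host exposes a registry, and feature modules inject themselves into it when loaded, so the host never imports them.

```javascript
// host.js -- knows nothing about the features that will enhance it.
const host = {
  features: new Map(),
  register(name, feature) { this.features.set(name, feature); },
  run(name, ...args) { return this.features.get(name)(...args); },
};

// datePicker.js -- a feature module registers ("enhances") itself.
// In a real app this file would be loaded lazily; the host only ever
// sees it through the registry.
host.register('datePicker', (date) => `picked:${date}`);

host.run('datePicker', '2018-05-01'); // -> 'picked:2018-05-01'
```

The trade-off the thread mentions follows directly: nothing in `host.js` tells you what has registered itself.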


> one of the hallmarks of DI is that you should have a single, central location where all your dependencies are assembled.

This is very interesting to me mainly because it's the first time I've heard it. In Martin Fowler's bliki article [0] he discusses having an assembler, but for me that's just an abstract factory pattern (perhaps I'm wrong). If you don't have abstract dependencies, then you don't need it. Do you have some sources which discuss this? I don't really see the advantage of using a kind of repository for dependencies.

[0] - https://martinfowler.com/articles/injection.html


There isn't any one spot where he explicitly states the idea that wire-up is handled centrally, but the idea is threaded throughout the article.

It's discussed a bit more explicitly by Mark Seemann here: http://blog.ploeh.dk/2011/07/28/CompositionRoot/

As for how this fits with abstract factories, in DI they exist to handle a specific problem: sometimes (but not always), actual instance construction has to be delayed until the last minute. But you still want things to be loosely coupled, and do the wire-up at the same place where you handle all the other object composition. So you do that by declaring an abstract factory, and then generating its concrete implementation at wire-up time. That way the details of the composition are still being injected, even if they're being executed at the site where they'll be consumed.
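A small JavaScript sketch of that idea (names are illustrative): construction is deferred behind a factory, but which factory gets used is still decided at the composition root.

```javascript
// The consumer depends only on an abstract "make me a connection"
// function; actual construction happens at the last minute.
function makeReportService(connectionFactory) {
  return {
    run(sql) {
      const conn = connectionFactory(); // constructed on demand
      return conn.query(sql);
    },
  };
}

// At the composition root we inject the concrete factory -- here a
// fake one, as you might in a test.
const fakeFactory = () => ({ query: (sql) => `fake:${sql}` });
const service = makeReportService(fakeFactory);
service.run('SELECT 1'); // -> 'fake:SELECT 1'
```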


Enhance kind of looks like Google hype. Essentially it's a form of dynamic dependency discovery: rather than explicitly adding imports, you crawl through a directory convention and discover things to bundle and connect.

I liked the talk, dependency management and code splitting are big problems in large apps, but there was a lot of hand wavy “we at google invented this, so it must be great” reasoning.


>For one, one of the hallmarks of DI is that you should have a single, central location where all your dependencies are assembled

While this is true, if you're doing your composition root right in a large application, then you're assembling most of your dependency graph using conventions, and assembling only the most unique things by hand.


While it may be a hallmark of DI, it's not necessary. Spring is perhaps the most famous DI framework of them all, and it frequently wires in relevant components by "classpath scanning".


For a long time there was denial that IoC containers and dependency injection added any value, the argument being that the module system and imports covered it. There are a few nice options out there now, like Inversify and TypeDI. I think TypeScript has attracted a different crowd with a different taste for what's good. I'm pretty happy about that.

Ruby went through this exact same thing. I think Sandi Metz's book was a turning point for the community.


How do module systems make IOC and DI obsolete? What's the argument?


I read that differently, like:

Some people thought ioc and di wouldn't be needed with module systems but we've now mostly realized...

Typescript at least is used with injection in Angular (and it is really nice.)


I meant why people thought it wouldn't be needed. For me they are different things so trying to understand the argument.


Well, one way you avoid the need for DI, if you're only using DI to swap out real collaborators for fakes in your tests, is to instead simply mock module imports in your tests (to mock those collaborators). This is common with Jest in the React world, and it's easy with a good module system.

I'm building 2 large apps with no DI, using this approach (Jest), and I never want to use DI again. I hated the DI system in Angular 2+.


Maybe I misunderstand DI. Say I want to have a "stateful" module: a class that I export that requires some config input (think Database). I instantiate it in index.js or something and then just pass it into the constructor of every other class that needs to use it. What is this called? If it's DI, how do I do that with modules only?

The reason I'm asking is that sometimes I want that dependency 10 levels deep, and if the 10th class wants to use it I need to pass it through all the other classes. Creates a lot of bloat.


Yes, that is DI too; you can think of it as something like 'injecting a shared singleton service', and yes, passing it into a constructor is the basic approach you might call 'manual DI'.

And yep modules can let you avoid the need for that, you can share singletons by exporting an instance from a module, and then if you import that module from 2 other modules, both will get the same instance [1].

Though tbh that still feels a bit gross to me :) There are other solutions (like the context API [2] in React for passing some data/instance to multiple components at different levels in your 'component tree').

[1] https://k94n.com/es6-modules-single-instance-pattern
[2] https://reactjs.org/docs/context.html
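A sketch of that shared-singleton pattern, compressed into one listing; `Database` here is a hypothetical stand-in for any stateful service:

```javascript
// --- db.js ---
class Database {
  constructor(config) { this.config = config; this.queries = 0; }
  query(sql) { this.queries += 1; return `ran:${sql}`; }
}
// Export the *instance*, not the class. Node caches the module, so
// every `require('./db')` returns this same object.
const db = new Database({ host: 'localhost' });
// module.exports = db;

// --- a.js and b.js both do `const db = require('./db')` and share
// state through the one cached instance: ---
db.query('SELECT 1'); // called from a.js
db.query('SELECT 2'); // called from b.js
db.queries; // -> 2, one shared instance
```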


Thanks a lot! Learned a bunch from this single comment. :)


No probs :)

DI is something I battled to understand for a while!


Is there a single place to learn about all of the common software engineering principles, as opposed to rediscovering universal principles over and over again?


Martin Fowler's site is a good start, especially this collection: https://martinfowler.com/eaaCatalog/ . And then read his books...


Short answer: No, but your best bet might be to read old, substantial books, and find other work by the kind of people who read and wrote those books before they were old.

Longer answer: Basically, you need to get away from the world of front-end web development, and from the echo chamber blog posts and conference talks by people who have only ever worked in that area. No-one is writing anything large, high-performance, high-reliability or long-lived there yet, so you need to look for experience from people who have had to do those things in other contexts instead.

Classic books like Code Complete and The Pragmatic Programmer, early "serious OO" books like Design Patterns, and practical advice books of a similar generation in the Effective <programming language> style all contain a wealth of knowledge and insightful commentary. Of course some of the technical details are quite dated today and some of the specific techniques discussed might no longer be considered good practice a decade or two later, but many of the underlying principles and the discussions around them are as relevant as ever. A bit more recently, there were interesting discussions about a broad range of programming issues in Beautiful Code and the related titles, and there have been some interesting case studies based on open source software too.

If you want to read some shorter pieces online, I recommend finding a few authors who work in fields like games or embedded software, where there are often significant performance and reliability constraints to deal with, not to mention hard deadlines that force difficult decisions and compromises. There are some healthy doses of reality in there that you won't find in less demanding environments.

Enterprise applications, while often considered bland, can also be large and some of the longest-lived software we develop, and much has been written about organising and maintaining them, including real-world pressures like changing operating environments and large development teams whose members come and go. Perhaps a little ironically, this now includes a fair bit of back-end web development. Some of the writing around this is well worth a read as well, but beware that this part of the industry is plagued by consultants who talk a good talk but don't have much of a track record or other evidence to back up their advice. Approach with caution, particularly anyone who uses words like "agile", "lean", "craftsmanship" and anything else on the buzzword scale.


In a way it's not so much rediscovering the principles themselves as discovering something about the ways that "pain points" in a language have changed over time... Such that the techniques' benefits begin to visibly outweigh their cost.


In all those books from the many "What books would you recommend every programmer read?" questions on HN, Stack Overflow, and every other forum developers gather.

EDIT: You said single place, sorry. Safari Books Online, CodeProject, and MSDN are potential candidates.


The software engineering body of knowledge approaches this: https://www.computer.org/web/swebok

I can’t give a personal recommendation as I’ve never done more than skim but I know some who hold it in high regard.


Grady Booch was working on https://handbookofsoftwarearchitecture.com/ which looked to be a really good resource, but I'm not sure it's getting worked on anymore.


Just stop being a frontend developer. Read some books about object-oriented software. Everything will come naturally.


Agreed again. We've been debating the merits of jazz vs. orchestra models for larger systems since at least when OO started getting popular, say back in the early '90s, and probably much longer by one name or another. This is basic software architecture stuff, though sadly in today's framework-heavy front-end world, a lot of that basic knowledge isn't as widespread as it usefully could be.


I would call it the plugin pattern. Where plug-ins enhance the main program.


“Enhance” seems to be exactly equivalent to what would traditionally be called “plugins”.


If that’s what you got out of it, you may have missed the message.

To me, the “we invented enhance” thing was just an illustration. I read the article as a tale of how choosing certain technologies can affect the way that applications are designed / developed / maintained, and that choosing the right tooling and patterns - in anticipation of how they will be understood and used - is an essential skill to develop for a senior-level developer.


It is tragic that the world standardized on JavaScript. They are just now discovering some of the module issues that Prof. Wirth solved in his great Modula-2 language in 1974. His system allowed separate compilation, and a way to detect when the interface API changed so that it could be cross-checked. I used Modula-2 to great effect on very large projects, and am greatly relieved that the Modula-2 module system has more or less made it into JavaScript.


You may appreciate Kevlin Henney's talk, "Procedural Programming - It's back? It never went away."[1]

It's a tour of how all the best discoveries of the past decade or two really date back to the 1960s.

[1] https://www.youtube.com/watch?v=otAcmD6XEEE


Same with PHP.

It was easy so many people could start their dev career with it.

But one day even PHP matured, so give JS a few years.


Don’t worry, they’ll discover types eventually.


Aren't most large-scale javascript applications using typescript or flow these days? I definitely wouldn't try building a large scale javascript codebase without either of those anymore.


Frontend developers at our company vetoed TypeScript for being too difficult to comprehend for JS devs.


The downside of Typescript is the constant friction from bolting a type system on top of a dynamic ecosystem.

It's like Babel and Webpack where you incur time spent on debugging your own tooling and getting these systems to collaborate. By the time I put in the effort, I decided it was simpler to go all in and use a compile-to-JS language instead of something that tried to be Javascript with types.


This is 100% spot-on. With our web development product, Elevate Web Builder, the furthest we went in the compiler toward "giving in" to the dynamic typing of the target JS was a variant type. Everything else has to be given an external interface declaration (classes, functions, and variables) that the compiler uses to type-check all calls to external JS or the built-in interfaces in the browser APIs. Other products have tried to allow for in-lining JS, or simply include-ing it via special declarations, but that really handicaps what the compiler can do in terms of statically analyzing your codebase, which kind of defeats the whole purpose of using types in the first place.


It's quite possible that people feel they lack the technical background to cope with TypeScript's type system (TypeScript = JavaScript + types). However, I would argue that the same background is a prerequisite for architecting large-scale JavaScript front-ends. So, IMHO, in the context of large-scale apps, TypeScript or Flow are a no-brainer.

The fact that it requires a build step turns out not to matter because everyone uses a babel / webpack pipeline anyway, so all that same complexity is there for regular javascript as well.


Front-end devs usually have a tight feedback loop. You don't want to wait longer than necessary to see your changes on the screen. You only run the build pipeline when you are about to push your updates into production.


I like TS a lot, but it's a different language so you need to transpile it. When webpack cached recompilation already takes up more than 30sec (which is the case for most of our routes), I will do everything in my power to keep it away from the project.


You can use JSDoc and get many of the advantages of TypeScript/Flow without having to change your syntax or add complicated build steps and source maps.
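For example, a file like this (a sketch; `add` is made up) gets type-checked in plain JS with no build step, via the `// @ts-check` directive:

```javascript
// @ts-check
// With @ts-check, editors that embed the TypeScript language service
// (VS Code, WebStorm) and `tsc --allowJs --checkJs` will flag type
// errors in plain JS using the JSDoc annotations below.

/**
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) { return a + b; }

add(1, 2);        // ok
// add('1', 2);   // the checker would flag this as a type error
```

Because the annotations are comments, the file remains ordinary JavaScript and runs unchanged.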


Uhhh, no. JSDoc doesn't fail at compile time for type errors, and doesn't have IDE support.


I haven't tried it myself, but I'm pretty sure both TypeScript and Flow have support for JSDoc, so while you don't have to transpile your code, you can still use it as a linter. You should also get the auto-complete and refactoring goodies. Also note that TypeScript and Flow can do inference, i.e. they can know the type without you annotating it.

Microsoft will not fully support inference and JSDoc, though, as their strategy is to embrace and extend JavaScript; they have tried for many years without succeeding. Google also tried with Dart, which is a much better language but too complicated/hard to learn compared to JS. It's very nice that they make tools like static analysis for JavaScript, but they don't have to turn it into another language! For example, instead of making the programmer write annotations everywhere, show a hint where inference fails and where an interface is needed.

Ohh, and don't get me started on "correctness": even static type systems like Java's can't prove correctness and have to rely on runtime checks. And it's not like the type system will detect all bugs, just the obvious ones. And JavaScript is far from static; it can change many times during runtime. Instead of bolting on a type system, embrace the dynamic nature of JavaScript, and write dumb, naive code that a five-year-old can understand. Use abstractions, use proper naming (even name the anonymous functions), comment where needed. Check function parameters and throw errors! And write tests!


It does have IDE support. VSCode, Visual Studio and WebStorm all support using jsdoc for autocomplete/intellisense.

So, the only thing it doesn't have is compile-time failures, and I'm fine with that: I don't really make mistakes that often, so I'm not doing a bunch of extra work identifying types for the compiler just so it can help me find mistakes.


> I don't really make mistakes that often

Famous last words. Everyone makes mistakes. All software has bugs!

But let's grant that you don't for the sake of argument. Is the rest of your team similarly infallible? Even if they are when writing code, will they be able to perfectly parse code they didn't write? What about when you're refactoring and you want to make sure you didn't forget to update any place a function is called? What about when you come back to the code in six months and don't remember what it does? What about when the code changes but you forget to update the JSDoc?

If you're writing something quick and don't have to work with people, sure, I'll buy that types are too much overhead. But when you start doing things at scale, they're a powerful tool for checking and documenting your code.


I’m with you. I actually prefer type systems when the language has them, however I find that languages that transpile to JS come with their own set of problems.

The jsdoc for my code is right next to the code so for all intents and purposes, it is the code. Intellisense for vars is always showing itself, to remind you if the jsdoc type is wrong.

It’s just a trade off, like many things in engineering. Millions of people have been coding in just JavaScript pretty well so far, so I wouldn’t limit the question to just my team.


> I don't really make mistakes that often

If you're not making regular mistakes, I suggest you seek more challenging work.


Thank you. I should say that I don’t make that many mistakes when I’m typing in the code.

When I’m designing it, I make plenty of mistakes. When I’m picking out libraries to use, I make plenty of mistakes there too. Most of my mistakes come from architecting things incorrectly like recently, when I chose to make a huge app into an SPA instead of classic web app which would’ve been simpler and would’ve performed just fine.


do you think most react projects are in typescript?


The article refers many times to the Closure JS compiler, which has been open-source for around a decade now.


The module system in Node.js is far superior, as it lets you treat modules like variables, pass them around, etc. The root of all complexity is that different code parts entangle. With local require's, code parts can be truly separate, which makes it easy to reuse and delete code. And you don't have to hunt a function/variable across the whole code base to figure out where and what it's used for.

And it also lazy-loads the module. ES6 modules might be a 44-year-old standard, but NodeJS modules are better!

For use on the web, the web server could grep require's from the source and push modules to the browser, so when a module is required it will be loaded from cache. The browser could even pre-parse the module to speed up run-time for when it's required.


The fact that they can't be statically analyzed is a huge downside. It's a trade-off.


>For use on the web the web server could grep require from the source and push modules to the browser

Nope. The server can’t figure out which modules are needed unless it runs the program:

    if (isPrime(366226717)) {
        require("hugeModule")
    }


No need to make sense of the code; just run a regex to find all the requires. Could also use a package.json.
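A naive sketch of what that regex scan might look like; it only catches static string-literal requires, so dynamically computed module names are invisible to it:

```javascript
// Hypothetical server-side scan: find statically analyzable
// require('...') calls in a source string.
const src = `
const a = require('moduleA');
if (cond) { require("moduleB"); }
require(dynamicName); // invisible to the regex
`;

const requireRe = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
const deps = [...src.matchAll(requireRe)].map((m) => m[1]);
deps; // -> ['moduleA', 'moduleB'] -- the dynamic require is missed
```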


    require(calculateTenthMersennePrime().toString() + ".js")


can you regex this:

var _0xdde4=["\x6C\x65\x66\x74\x70\x61\x64"];require(_0xdde4[0])


Worst-case scenario, the browser has to request it from the server. This is bad because execution would have to stop and wait, but in most cases it would just be for a few milliseconds. And the developer should take that into account, and maybe prep the cache for possible paths.



There should be a feature that matches the url of your link against posts within the last 7 days, and alerts you to possible reposts prior to submitting.


There is.


I'm really appreciative of the mention of the inverted import.

Inverting the flow really lets components "register" themselves and thus be fully contained rather than be "required" by another component. It can be powerful but it causes the problem that you never really know what all is registered to another component.

Also, yeah, code-splitting isn't easy!


Reminds me a lot of the League Client architecture that Riot wrote about:

https://engineering.riotgames.com/news/under-hood-league-cli...

Every component of the client is independently built, tested, versioned, and registers itself with the main process on startup.

I don't think I grokked the nature of "register" vs "require" and the subtle implications of containment vs requirement until now.


A simple way to look at this is when you build the application it’ll be from the top down, but when people have to maintain it, they work from the bottom up. You shouldn’t have to know that there is a central registry for a file to work on a file, it should be clear by reading the file.


I gave a talk about these ideas in the context of a React application last year at React.js Conf: https://youtu.be/395ou6k6C6k. It might be worth a watch if you're interested in how to build applications like this on your own.


Does anyone know if this can be done in Vue.js?


thanks! I appreciate it! :)


I agree with much of what is written here, but the emphasis on code splitting seems like it risks hitting the wrong target. My first question is usually what caused even relatively simple web apps to need multi-megabytes of "optimised" JS code in the first place.

All too often, the answer is not the application code itself but rather all the direct and transitive dependencies, where the developers yarn-added first and didn't ask questions later. And while obviously that offers some advantages in terms of reusing code, it does also have a very high price in terms of bloating our deliverables, particularly given the still relatively immature tools for optimising JS compared to traditional compiled languages.

Maybe we should be looking at that problem first, before we start assuming that any large JS application will necessarily produce some huge monster outputs that need to rely on code splitting and all the overheads that brings just to get acceptable performance?


I think the author still has a point when talking about "very" large applications. His example of all the Google widgets is a good one. Even if you were to hand-craft all of them with optimised code (which I imagine is what they do at Google), it would still likely be too much code for a single package to load in a reasonable amount of time.

But I agree completely with you as far as other projects go. If you drop the "very", it should be entirely possible to build a large project without the need for code splitting. Almost nobody is doing anything to the scale of the example talked about in this article, and almost everybody is using code splitting.

In my experience, there are two root causes for this.

Cause #1 is that barely any front-end developers understand or consider the cost of abstractions. Something like code-splitting is commonly seen as "free" because it takes a couple of lines to implement. The permanently increased complexity of the software and all that extra brain power required to grok it over the development lifetime is never taken into account. At least half of the devs I know are happy just banging in code-splitting at the start of a project with zero thought and I guarantee you they'd read this article and not understand the "sync -> async" example given to explain the downsides of code-splitting.
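To make the "sync -> async" cost concrete, here's a minimal, hypothetical sketch (plain JS; all names invented): once a function lives behind a split point, every caller of it has to become async too, and that propagates up the call stack.

```javascript
// Hypothetical sketch of how a split point changes the calling convention.

// Before splitting: callers can use the result inline.
function formatSync(value) {
  return `<b>${value}</b>`;
}

// After splitting, the formatter arrives via a loader. A resolved promise
// stands in here for a real `import('./formatter')`.
function loadFormatter() {
  return Promise.resolve({ format: formatSync });
}

// Every caller is now forced to be async, and its callers in turn.
async function render(value) {
  const { format } = await loadFormatter();
  return format(value);
}

render('hi').then(html => console.log(html)); // prints "<b>hi</b>"
```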

Cause #2 is that devs are too eager to add new dependencies. By definition a popular library is going to be extremely bloated relative to your use case. When you 'yarn add react-redux-awesome-date-picker', what you're usually adding is an overly generic solution with a large amount of customisability and functionality that's not relevant to your use case.

My go-to example from my own experience is with rc-collapse. rc-collapse is a container that animates to and from 0 height. It's like 200 lines and has 4 dependencies (god knows how many KB that adds). I've been using my own version that's 50 lines and 0 dependencies in production code for years and never run into a problem with it. I'm sure rc-collapse works around some fancy edge cases or supports older browsers or something, but I'm almost positive the extra weight isn't necessary in 90% of the projects that use it.

This is kind of tangential, but another real problem with this mentality is that by implementing my own collapsible container, I learnt some important lessons about React, the DOM, browsers etc. Devs that play npm lego aren't generally going to get that extra knowledge, which will cost time in the long run.


The framework he is talking about is trash compared to modern JavaScript UI frameworks.

There are very few established patterns and each team seems to make up their own rules. Some of the worst code I've ever seen was a result of this framework.

The code-splitting is probably the best feature and worked great when this was released, but smart lazy-loading of bundles is much easier in 2018.

The most frustrating thing is that the framework developers consistently point to benchmarks created over 3 years ago to show how great it is.


This article isn't about the framework though. It's incidental.

This seems a very out of place comment.


I get what you're saying and maybe I'm being harsh, but the whole backdrop of the article is this supposedly amazing framework that was built. It undermines a lot of his points that it is poorly used.


The apps listed in the talk (Photos, Sites, Plus, Drive, Play) all seem to have gotten much better lately? Are you actually sure you are talking about the same framework as the author? Google has so many it can be confusing.

If you'd be talking about the closure library and dreaded goog.ui.Component, I'd agree but that isn't it.


So what framework is it?


The article mentions that it's an in-house proprietary framework used in some live Google products, such as Drive, Photos, Search, and Play.


It's interesting how in 2018 web development === react. React this, react that.

Anyway, I think beginner empathy is when you anticipate the usage of your API, or modules. Pro empathy is when you talk about it and share your feelings and needs and then listen to those other users' feelings and needs.

Fractured pluggable routing may be a good idea, but you know, at a certain point a centralized route definition may be a good idea too. There's no one solution fits all strategy.

Major routes === modules (oh boy, enhance). We're all MVC (with islands of goodness) again. Frontend is the new backend, but it's fronted too. Doing frontend is hard.

Dependency injection. Malte talks about the google search result page as an example for ultra complexity - Angular tries to sell itself for very high complexity ("enterprise") projects, yet I would eat my hat if Google plus or even the search result page would be better in Angular. The frontend landscape is fractured and everyone tries to sell you his or her snake oil.

Base bundles: yes, they suck. That's why common chunk creation is delegated to the bundler (like webpack), and then you believe that if a datepicker is used more than three times it's okay to load it all the time. Or you can be a smartass and implement require-ensure-like logic all yourself (maybe it's not your fault, maybe a very clever manager made you do it). And then you will feel very smart and good, and after a year or so you realize how stupid it was, and then someone comes along and delegates it back to the bundler.

Large applications in javascript are like Frankenstein's monster, but it's better to have a monster than a pile of dead flesh jolted by electricity with each and every release. My two cents.


> It's interesting how in 2018 web development === react. React this, react that.

Its quite popular for sure, but after just doing a job search for about 6 months, I came across a surprising number of firms using ng, ember, and even vue in a couple places.

modern ruby on rails with ujs is still hugely popular


You're lucky. I've been through a job-hunt cycle; my experiences here are: lots of legacy stuff (mostly Angular 1), tons of React, some nu-Angular and one Vue (which is going to be my next job, not just because I love Vue, but also because they understood the tech stuff I had been talking about, unlike many others).


This was a very interesting article!

I'd currently consider myself a mid-level engineer, and I have recently been put in a position where I must lead a team of many devs and incorporate all their code. I did understand about half of this article, but it also indicates that there's a lot left for me to learn.

I'm curious what kind of blogs or online courses or books I should search for if I wanted to better develop this skill within myself, and to design and architect these sorts of large applications with multiple contributors?


My company also rolled our own framework to handle the case of 'very large web applications'.

In our case, every component is a file (which is becoming more typical) -- but it differs in that every file dependency is automatically resolved and downloaded in a lazy, on-demand fashion. With HTTP/2, this means you can update a single file without a build process -- and everyone receives the update as a minimal change.

Also, all paths are relative or built dynamically using something like C #define, so it's easy to refactor by moving entire directories about.
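As a rough illustration of that `#define`-style path scheme (not the parent's actual implementation; names invented), a small alias table means a directory move only touches one entry:

```javascript
// Alias table: module paths are written against stable prefixes, so
// relocating a directory is a one-line change here rather than a
// codebase-wide refactor.
const aliases = {
  '@ui': '/src/components/ui',
  '@lib': '/src/lib',
};

function resolveAlias(path) {
  for (const [alias, dir] of Object.entries(aliases)) {
    if (path === alias || path.startsWith(alias + '/')) {
      return dir + path.slice(alias.length);
    }
  }
  return path; // non-aliased paths pass through untouched
}

console.log(resolveAlias('@ui/button.js')); // → /src/components/ui/button.js
```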


Don't the popular web frameworks handle lazy loading things?


Most of the popular frameworks generate a single JavaScript bundle that comprises an entire application. Some people break these bundles into multiple parts manually in a process called 'code-splitting' in order to do a really chunky kind of lazy load.
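A toy version of that chunky lazy load can be sketched as a bundle registry (plain JS, illustrative names; in a real app the loader thunk would be something like `() => import('./settings')`):

```javascript
const bundles = new Map(); // name -> loader thunk
const loaded = new Map();  // name -> in-flight or resolved load promise

function register(name, loader) {
  bundles.set(name, loader);
}

// A bundle is fetched only on first use; subsequent loads reuse the
// same promise, so the network cost is paid at most once.
function load(name) {
  if (!loaded.has(name)) {
    loaded.set(name, bundles.get(name)());
  }
  return loaded.get(name);
}

register('settings', () => Promise.resolve({ open: () => 'settings opened' }));
load('settings').then(m => console.log(m.open()));
```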


Code splitting via webpack is not a manual process.

Does your hand-rolled framework handle minification and obfuscation? Probably not, because any changes would require a rebuild.

Are your assets cached? If they are, I'm curious how you handle the cache busting when a single file gets updated.


With HTTP/2, you can send files down independently, without creating a large bundle, so each file can be cached. You couldn’t obfuscate function names across files though unless you made the process deterministic based on module name.


But you need some cache-busting functionality anyway (otherwise how does the client know that file "something.js" has changed?)


ETags could be used to make the browser revalidate the contents without too much overhead, since the connection is already established, but good point.


I thought the article would have just one line saying "Use Elm"


After 1 year using Elm, we've had a lot of great and not-so-great experiences. For a large app I still think choosing Elm was overall a better choice than javascript, but it's not without downsides.

Recently a new dev joined the front-end team. He had a JS background and no previous functional programming experience. In less than one month he was already shipping production code with confidence on our ~40kloc app. That goes to show that 1) I don't think the lack of Elm programmers is a problem if the company is willing to train its employees, and 2) I'm sure the front-end lead and the new dev would have had a much harder time if we were using javascript.

Of course, this is just our experience. YMMV.


The killer feature of Elm is something I discovered a year after taking a break from Elm: I revisited a production Elm application I hadn't touched in a year, approaching it almost regretfully with "why did I have to use Elm?", certain I'd have no idea what was going on after such a long absence.

What actually happened was that I was immediately productive.

Without having to re-acquaint myself with the code, I was able to stub out new features and push an update. An infrequent experience for me in other languages. I had only a figment of an idea of the code I had to write, but I got started and the compiler helped me the rest of the way.

I still don't understand how to confidently organize an Elm app. But that describes my relationship with every front-end Javascript framework as well. I regularly choose poorly between component vs upstream state in React. But what I can do is refactor Elm code at a level that would be downright expensive in Javascript.

As far as the sour graping going on in some nooks of the Elm ecosystem, I'm reminded of this post by Rich Hickey: https://www.reddit.com/r/Clojure/comments/73yznc/on_whose_au...


Sadly Elm development is not going so well. There are open pull requests without a response for 2 years. Source: https://www.reddit.com/r/elm/comments/81bo14/do_we_need_to_m...


Elm has always been that way. Trying to say that this is new behaviour and a sign of it “not going well” is a bit off-base I think.

Sure, the particular behaviour and choices of the core team are one of the reasons I don’t use it in production. But they don’t try and hide it, and it’s certainly not new!


That's too bad. I didn't know that this isn't new because I recently was looking more closely at front-end web development.


I really dislike these frivolous images that can be easily represented by blockquotes. It just wastes data.


What is the metric to consider a JS app "very large", as the author puts it? If SLOC is used as a proxy, then how is code from libraries (i.e. code that the team didn't write) counted? Recently, when I generated a Ruby-on-Rails app with webpack enabled, it downloaded 8148 JS files into the node_modules folder. 675K SLOC. I thought it was absolute madness.


The size of node_modules probably isn't a great metric to use if it includes development tools and libraries (and all their transitive dependencies) as well as any libraries you're bundling into the files your user will download and run.


Well, I still have to have those on my computer and in my Docker container. And from a trust, auditing and security perspective it’s pretty horrible, even if multi-stage builds can help

With Buble, Flow, and Roll-up, I can get a decent modern environment with as few dependencies as possible. Upsides? Much faster, too. Less disk space. Docker containers build faster, which is better for CI/CD. And it’s easier to understand the codebase and dependency tree, including development tools.
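For what it's worth, a minimal Rollup setup along those lines might look like this (a sketch, not a recommendation of exact package versions; assumes `rollup-plugin-buble` is installed, and the entry/output paths are invented):

```javascript
// rollup.config.js — lean build: Buble for down-compilation, Rollup for
// bundling and tree-shaking, and no further tooling in the chain.
import buble from 'rollup-plugin-buble';

export default {
  input: 'src/main.js',
  output: { file: 'dist/bundle.js', format: 'iife' },
  plugins: [buble()],
};
```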


Just to clarify, I'm not disagreeing with you that the current dependency-heavy JS world is madness or that the bloat during development activities is unhelpful. I'm just saying that having a large/complicated development environment is probably a separate issue to having a large application for most other practical purposes, but looking at the size of node_modules will often conflate the two.


Wow, this must be pretty common. I compiled Bootstrap recently and it downloaded 250 MiB worth of node modules, consisting of 969 dependencies and 11673 JS files.


Which modern JS framework offers component-level lazy loading? Angular offers only route-based lazy loading, which is limited, as the article pointed out.


There are different ways to do it depending on the bundler you use. With Webpack and dynamic imports, you can create a custom loader around the promises returned by Webpack's chunk loading.

For React, you can use React Loadable (https://github.com/jamiebuilds/react-loadable) that provides a Higher Order Component and Server Side rendering.


Pretty much trivial to implement in react.

    class Loader extends React.Component {
      state = {
        Component: null
      };

      componentDidMount() {
        // A dynamic import resolves to a module object, so pull the
        // component off of it (assuming a default export here).
        import('./Component').then(module => {
          this.setState({ Component: module.default });
        });
      }

      render() {
        const { Component } = this.state;
        if (!Component) {
          return <div>Loading...</div>;
        }
        return <Component />;
      }
    }
Might want to add a few lines of error handling for production use, but that is pretty much all you need.


Like the article (presentation) said, it's not the best idea to do this with hundreds of components (I think he mentions latency) - in fact you "somehow" have to (should?) bundle things together, and then you are back to square one, or at least have to think about pluggability/configuration again.


Obviously you don't want to use it for every component, you put it in key places where you want lazy loading to happen.


It isn't well documented yet but this is possible in Angular (2+). Here's a good tutorial from someone who figured it out.

https://blog.angularindepth.com/dynamically-loading-componen...




My company's framework lazy loads every component automatically by default (you can override this behavior to batches as well).


Vue.js has such functionality (when used with webpack & vue-router)



Does Google really use React? I got that impression from skimming the article. That'd be interesting, given they develop at least two competing technologies, Polymer and Angular.


The talk mentions they use an in-house framework, but react was probably substituted in to make it more approachable for the audience.


You can't really say "does Google use X" because Google isn't a monolith.

Big companies usually keep teams cross-functional, leaving teams with the discretion to use the correct tool for the job/team.


One of those teams can use React and then Google would use React.

Nobody cares how much of Google uses React...


Isn't React actually open source? https://github.com/facebook/react


You were misled by the slide image interrupting the flow of the text:

So, I build this JavaScript framework at Google. It is used by Photos, Sites, Plus, Drive, Play, the search engine, all these sites. Some of them are pretty large, you might have used a few of them.

[Slide]

This Javascript framework is not open source.


Oh, thanks for the clarification.


I found it a tad bit confusing as well, thanks for asking.


Yup, Facebook switched it from a BSD + Patents license to MIT in September of 2017.

https://github.com/facebook/react/commit/b765fb25ebc6e53bb8d...


> This Javascript framework is not open source. The reason it is not open source is that it kind of came out at the same time as React and I was like “Does the world really need another JS framework to choose from?”. Google already has a few of those–Angular and Polymer–and felt like another one would confuse people, so I just thought we’d just keep it to ourselves.

Given the current state of affairs, an open-sourcing of this framework would be welcome.


Do you have some specific reason for this (genuinely curious)?

Think we've put ourselves in a corner, so to speak, with the Angular, react, preact, vue, ember, etc etc options and this would offer something compelling from the authors examples?


The article discusses one of the major problems: "React component statically depend on their children." There are workarounds, sure, but there are all kinds of pitfalls with the dynamic workarounds, to the extent that I find myself breaking the react model regularly in applications (e.g. utilizing state outside of redux and forceUpdate)



