Hacker News
Buck – A build system developed and used by Facebook (buckbuild.com)
272 points by Nimsical on Mar 14, 2017 | 127 comments

I don't work in a shop where performance/speed is important, but I am looking for other ways to do things I would do in Make but...not in Make.

For example, my use-case is similar to what Mike Bostock described in "Why Use Make" [0] when explaining how he uses Make to build out his data transformation process. Most of my work is data transformation/small-scale ETL, but I just haven't been able to get into Make beyond trivial work, and I often end up writing things in Rake (Ruby).

So I was wondering if other devs had tried using Buck/Bazel for everyday hobbies and projects, and whether you stuck with the new tool or went back to Make? The portability of Makefiles isn't a high priority for me, and I like experimenting with different systems for my own projects.

[0] https://bost.ocks.org/mike/make/

I tried Buck and found it to be a poor clone of the Google build tool it's based on. If you're interested, go straight to Bazel -- it's the real thing.

Edit to add: here's a specific complaint. To run arbitrary commands and shell scripts, you use genrule(), but in Buck a genrule can only have a single output. I used Buck to preprocess and organize the assets for a game, and that restriction made it very awkward.

It's been a while since I used Buck, and it looks like this has improved a bit -- you can now output a folder of files: https://buckbuild.com/rule/genrule.html
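For reference, a directory-output genrule looks roughly like this (the asset-processing script and paths here are invented, not taken from the linked page):

```python
# Sketch of a Buck genrule whose single "out" is a directory of files.
# "process_assets.py" and the asset paths are hypothetical.
genrule(
    name = "game_assets",
    srcs = glob(["raw_assets/**/*"]) + ["process_assets.py"],
    cmd = "python $SRCDIR/process_assets.py --src $SRCDIR/raw_assets --out $OUT",
    # $OUT is a single output path, but it can be a directory that the
    # script fills with as many files as it likes.
    out = "assets",
)
```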

Here's the top-level Bazel BUILD file of the open source project I work on at Google, if anyone wants to see an example for a good-sized (~100K LoC) codebase: https://github.com/google/nomulus/blob/master/java/google/re...

Each subdirectory below the root also has its own BUILD file, where you can see the individual packages being compiled.

These build systems become super handy once you're working on a large tree of many different software projects. For small projects not so much. It's probably easier to use the default build system of your programming language.

When make fails me for personal stuff, I switch to ninja [1]. The Chromium people use it in their Bazel-like build system. A hidden gem I recently discovered is doit [2]. I found this one incredibly helpful when I had a build with tons of 1:N, N:1, N:M dependencies.

[1] https://github.com/ninja-build/ninja [2] http://pydoit.org/

Thanks for the suggestions! That "doit" looks like it's simple enough for my fairly pedestrian needs. I wonder if my aversion to Make is that I don't spend enough time thinking/making Makefiles, but having something a little friendlier and simpler -- at the cost of ubiquity/portability -- might be the kind of training wheels I need.

Edit: FWIW, I posted doit to r/python and someone suggested Snakemake, which also looks excellent and well-maintained: http://snakemake.readthedocs.io/en/stable/

> A hidden gem I recently discovered is doit [2]. I found this one incredibly helpful when I had a build with tons of 1:N, N:1, N:M dependencies.

> [2] http://pydoit.org/

Thanks for this! I need a build tool for a Python based data analysis project ("micro ETL"?). Doit looks like it will work nicely, with the bonus of keeping everything in Python.
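For anyone else considering it: a doit pipeline is just a dodo.py file of plain Python functions returning task dicts. A hypothetical two-step micro-ETL sketch (the scripts and file names are invented):

```python
# Hypothetical dodo.py: doit discovers functions named task_* and wires
# up the dependency graph from their file_dep/targets entries.

def task_clean_data():
    """Normalize the raw CSV before analysis."""
    return {
        "actions": ["python clean.py raw.csv > clean.csv"],
        "file_dep": ["raw.csv", "clean.py"],
        "targets": ["clean.csv"],
    }

def task_report():
    """Build the final report from the cleaned data."""
    return {
        "actions": ["python report.py clean.csv > report.txt"],
        "file_dep": ["clean.csv", "report.py"],
        "targets": ["report.txt"],
    }
```

Running `doit` then rebuilds report.txt only when one of its file dependencies has changed.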

So far I've also always found `make` to be the best option for what I need, especially because it's already available everywhere (albeit sadly in very subtly incompatible versions).

Anyway, one thing that has always bothered me about make is how much it depends on file modification dates. Imagine if it instead used a very optimised hashing algorithm over the input content. Content could be a file or any URI, so it would use wget/curl and ssh as dependencies. Hashing would be optimized to fail early -- e.g. hash in n kB increments, and mark the input as changed and return as soon as an increment differs.

Imagine how well this would now suddenly integrate into our modern web service based landscape. You could hook together services quite easily:

    all: report.csv

    report.csv: ${customers_json_uri} ${earnings_json_uri}
        ${report_executable} $^ > $@
Has anyone built something like this already?
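The staleness check at the core of that idea is easy to sketch. A minimal, hypothetical version in Python (the hash-database filename and chunk size are arbitrary choices):

```python
# Sketch: rebuild decisions based on content hashes instead of mtimes.
import hashlib
import json

HASH_DB = ".build_hashes.json"  # arbitrary name for the recorded hashes

def content_hash(path, chunk_size=64 * 1024):
    """Hash a file in fixed-size increments."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def _load_db():
    try:
        with open(HASH_DB) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def is_stale(target, inputs):
    """True if any input's content differs from what last built `target`."""
    old = _load_db().get(target, {})
    return any(old.get(p) != content_hash(p) for p in inputs)

def record(target, inputs):
    """Remember the input hashes that produced `target`."""
    db = _load_db()
    db[target] = {p: content_hash(p) for p in inputs}
    with open(HASH_DB, "w") as f:
        json.dump(db, f)
```

True early exit would store per-increment hashes so a changed file is detected after its first differing chunk; the whole-file hash above is the simple version.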

Buck (and Bazel) essentially do what you want and more:

- Don't use modification times and instead do intelligent hashing (see https://buckbuild.com/concept/rule_keys.html)

- Include the compiler and a bunch of other stuff in the hash. This means you can share cached artifacts safely via a http cache (https://buckbuild.com/concept/http_cache_api.html)

- Can refer to remote files (see https://buckbuild.com/rule/remote_file.html)

- Can use any script to do anything as long as it has a single consistent output (see https://buckbuild.com/rule/genrule.html)

Because their model is so much better, you get crazy fast local caching and remote shared caching of artifacts.
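The rule-key idea reduces to something like this (a toy sketch, not Buck's actual algorithm -- see the rule_keys doc linked above for the real thing):

```python
# Toy version of a "rule key": everything that can affect a build step's
# output is folded into one hash, which then keys the artifact cache.
import hashlib

def rule_key(input_hashes, command, toolchain_version):
    """Combine input content, the command line, and the toolchain."""
    h = hashlib.sha256()
    for ih in sorted(input_hashes):  # order-independent over inputs
        h.update(ih.encode())
    h.update(command.encode())
    h.update(toolchain_version.encode())
    return h.hexdigest()
```

Because the compiler version is part of the key, a cache hit fetched from another machine is safe to reuse; an mtime-based tool can't make that guarantee.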

Please don't use make, the newer build tools are better in virtually every way.

(disclosure: my team created Buck when I was at FB)

I have briefly used https://github.com/Factual/drake for this.

It is a bit unpolished and still a work in progress, but has some features that are useful for data science workflows that most software build tools do not cover:

* Steps can produce multiple named outputs (for example full.csv and summary.csv), dependency graph can branch in both ways,

* Outputs that are directories with many files,

* Built-in input/output file sharding (for example by date or month) and HDFS support,

* Lots of control over which steps to execute: not just "rebuild target X and any required upstream dependencies". Can select a target and all that depend on it downstream (if you know that a remote file changed), skip rebuilding some steps, etc,

* Can put very short (text processing) scripts directly in the Drakefile, in Python or Ruby.

I'm not commenting on Drake or its usefulness, but concerning (GNU) make:

* you can specify several inputs & outputs per rule

* you can specify which script interpreter is used ($SHELL), and python can be it. See https://www.gnu.org/software/make/manual/html_node/One-Shell...

* downstream make can indeed be interesting. however you can try to build all ultimate targets and let make sort out which ones can be avoided

* HDFS support is best done at the OS level (https://wiki.apache.org/hadoop/MountableHDFS, not sure about timestamps)


* "everyone" "hates" gnu make, which means everyone knows it.

(edit :format)

AFAIK, you can't specify several outputs per rule in GNU Make without relying on horribly complex workarounds. It's actually the feature I miss the most in it.

Is this too complex and/or insufficient? https://www.gnu.org/software/automake/manual/html_node/Multi...

That's the kind of workaround I had in mind and is probably sufficient, but yes, ~25 lines of cryptic and error-prone interleaved GNU make and shell script just to express multiple outputs correctly is way above my personal threshold for “too complex” or even “sane”.

That doesn't really look like "handling it" by any sensible definition. That looks like a huge kludgy workaround, almost like the user implementing a build system inside the build system. (I mean, wtf, asking the user to implement a lock-and-set operation inside their build?)

GNU Make handles multiple outputs intuitively in pattern rules, but differently for normal rules? Uh oh.

It even has builtin implicit rules. Enabled by default. Checking for the existence of files that haven't appeared in any new project for maybe ten or twenty years (such as RCS and SCCS version control files).

On. Every. Single. Run.


But I love the built-in implicit rules. I use these frequently.

Often, I write a little test program in C or C++. I name the file "foo.c". Then, to build it I simply type `make foo`.

Sure the rules may be a bit stale, but there isn't much software I use today that I could have used in a near identical manner 40 years ago. Make has truly stood the test of time.

Closest in spirit to make, and doing essentially all you want, is "redo": a concept drafted by djb and implemented independently by apenwarr[0] and by others. apenwarr's version has a super short "do" script (a couple of hundred lines of portable sh IIRC) which uses the same configuration but does a full rebuild each time.

The two weak points are: (1) multiple outputs from a compilation step (y.tab.c and y.tab.h from yacc) are not properly supported, and (2) no Windows support. Other than that it's the perfect minimalistic make replacement -- what make should have been.

Additionally, there is tup[1]. With some assumptions about the build process that usually hold in non-distributed builds, it is the fastest, simplest make replacement; you just write a list of commands that build your final outputs, and by tracing the processes it figures out exactly what needs to be done next time -- nothing more, and nothing less.

[0] https://github.com/apenwarr/redo [1] http://gittup.org/tup/

Buck (and Bazel) both have rules to run and cache arbitrary rules/scripts as part of the build, assuming they produce one output:


Some other benefits that may or may not be important:

- Use the same build system for iOS/Android/Cxx/Python/D, etc.

- Integrates with editing in IDEs.

- Encourages modularization...more modules = more cache hits = faster builds

I have used Bazel (on Mac OS) to build a small C++ project. The nice thing about it is that it can be easily configured to download and build third-party dependencies from github. I have used this feature to add gflags, protobuf, and googletest dependencies to my project. That being said, it is still not perfect. For instance, I was not able to build GzipInputStream and GzipOutputStream (see e.g. https://github.com/google/protobuf/issues/2365).

I've often found myself in a similar situation, where I just can't seem to enjoy using Make and never make it past trivial things. For that reason, I'll shamelessly plug what I think is a viable alternative: the Taskfile.


Funny, I read that article and got excited that I finally know how to use Make. Then I looked at one of the examples he pointed to, and discovered he abandoned it in November last year :P


Buck has been open source for a while! It's especially popular among larger companies who have a ton of mobile developers contributing to their apps, but I'm not sure if FB has ever publicized who exactly is using Buck. Other companies have been contributing to the Buck ecosystem, too, though, like https://github.com/uber/okbuck

It looks like they are starting to put together a list of who is using it:


This looks like a clone of Google's Blaze/Bazel: https://bazel.build/

Blaze predates Buck but Buck was open-sourced before Bazel. The two tools also have different origins (Blaze for server applications and Buck for mobile when I was at each respective company).

Buck was started after former google employees at facebook wanted to use something like blaze (ended up being called bazel). Kinda like a lot of other things they copied from google at facebook. Dremel -> presto, etc.

What are some other things apart from dremel?

Such a valuable post. Thanks for the side-by-side comparison!

As others have mentioned, Buck was open-sourced before Bazel but obviously inspired by Blaze.

There's also Please, which we wrote as a Buck replacement before Bazel was open sourced: https://github.com/thought-machine/please

(disclaimer: I wrote most of it...)

Pants is also a popular build system (started by an ex-googler?).

Pants are also a thing you can wear on your legs.

Leg hats. Dan Grover knows what's up.

Or your head.

> "Buck is a build system developed and used by Facebook."

I have really, really grown to resent this culture of proud and unabashed cargo-culting that we've arrived at in the open source world. Why is this the first sentence describing a new project? Why do we need a Facebook™-approved build system? Does that somehow make it better than the others? And why does Facebook need their own build system? Was the existing ecosystem technically insufficient for them, or was the issue a legal one?

Whenever I make this point in dev circles, someone will reply, "They serve X amount of visitors a day, so they must know something!" Well, they also have a firehose of ad money pointed at them 24/7.

It does actually make a difference. Contrast:

> "x is a build system developed by me as a side-project that I might drop at any time, and its only production use is to compile itself and a hello world app"

Not to say people's side projects aren't useful, but there are only so many hours in a day, and there are thousands and thousands of open source projects, so you need some way to evaluate what's worth looking at. Knowing it has more than one person behind it, some time in the field (so it's not a 0.0.1 alpha), and is actively being used (you're not writing the first production code using it) goes a long way to separating it from the pack.

This does not logically follow. Why does facebook know best what kind of build system is suitable for my needs, given that I'm probably nothing like facebook? Why does George Foreman know how I want my grill?

I understand the human need to optimize attention, use basal heuristics to weed out unattractive options, and nurture a need to belong, but this is not logic. It's a rationalization.

It's also why this works, and why facebook (and every other major tech player) does it. It's a brilliant marketing and recruiting tool and a way to insert influence under the guise of open benevolence.

Hidden out there is probably a better tool for your needs. But now you'll never find it.

> I understand the human need to optimize attention, use basal heuristics to weed out unattractive options, and nurture a need to belong, but this is not logic. It's a rationalization.

There is an underlying logic though when you frame it in as a time vs reward problem. I could spend a few months evaluating all of the various build tools out there and find the perfect fit for my team. Or I could spend a few days evaluating the projects with the most support (be that large companies, large communities, whatever). I'll concede that I can't guarantee it's the _perfect_ fit, but it's likely good enough when you compare the opportunity costs.

> Why does facebook know best what kind of build system is suitable for my needs

well, by definition your needs are yours, and only you know them, right? perhaps that is your point... but then this question would seem to be malformed, or perhaps disingenuous. why does facebook need to know "your needs" to develop a good build system? maven developers don't know "your needs" any more than make, cargo and all the rest.

> I'm probably nothing like facebook?

it's not clear what it exactly means to be "like" or "nothing like" facebook, without further context. differences with your understanding of facebook's operational requirements do not necessarily translate to same or similar differences with their software development and build needs and practices.

> Why does George Foreman know how I want my grill?

similarly, the structuring of "my grill" bit seems problematic. also, this seems outright unrelated, unless foreman is in fact a very intensive user of grills and has optimized the grill design over time based on his experiences... and you are also, in some capacity, a professional user of grills.

> It's a brilliant marketing and recruiting tool and a way to insert influence under the guise of open benevolence.

even if we accept the premise of it as a "marketing tool", this doesn't imply that it's not a solid tool proven in real world projects. no obvious mutual exclusivity here. it also doesn't mean it's not-benevolent.

it would seem impossible for any company to avoid this pointed finger of yours if they release a project, because any engineer is going to ask "where did this come from" at one point or another. then, presumably, they will have been victims of villainous marketing.

> Hidden out there is probably a better tool for your needs. But now you'll never find it.

maybe, maybe not. how much better? at best, unknown or unresearched tools would seem to have indeterminate benefit (or lack thereof) relative to what is known and researched.

As a complete outsider, the way I see it a lot of the systems at Facebook actually originated within Google, but were never published. Engineers from Google then went to work at Facebook and wanted to replicate the systems they were already familiar with. The only difference is that Facebook actually releases most of these things as open source. So I guess the good news is that when engineers leave Facebook and go to work at "next big thing" they can bring the same tools with them without having to rewrite them again.

I think you are overreacting. The first sentence has to introduce you to what you are reading about, and this seems like a fairly minimal description of what Buck is. There are more detailed points below the first paragraph, and a talk available on YouTube: https://www.youtube.com/watch?v=uvNI_E0ZgZU.

Yeah, sorry. I've just been seeing this more and more in open source and the first paragraph just jumped out at me. Why is it so important that things we use be developed by major companies?

It's an unpopular opinion, but I feel that React et al are just a way for Facebook to gain developer mindshare. Likewise, Angular is a play for Google to gain developer mindshare. The big guys want developers locked into their ecosystems. It's a play out of Microsoft's book.

When you realize that none of these frameworks are even an improvement over jQuery, it becomes clear what the true motivation is.

Well, IMO, React is much better than building the equivalent in jQuery.

I once thought that too. But after working under a jQuery ninja I learned that it's far more powerful than most people give it credit for. More importantly, it's significantly easier to reason about DOM changes in jQuery vs other frameworks because there's no magic at all.

For example, here is a SQLite playground that I built with a friend https://sql.glitch.me/. The entire frontend is only 100 lines of Javascript thanks to jQuery. I'd love to see what the React implementation looks like.

Pretty sure no one really doubted jQuery's power.. it's about maintainability, state management, modularization.

If you tinker hard enough, you can achieve those with jQuery, but it'll never be as easy as React.

Also, your site doesn't load (project not found)

I guess we just disagree over whether or not all that is made easier with React. I have worked on large jQuery codebases that do everything you mentioned: they are maintainable, they manage state in a comprehensive fashion, and they are modularized.

JavaScript Fatigue inducing media tries to taint jQuery calling it all sorts of things but that hasn't stopped me. Sure you need to learn and use a handful of useful patterns to build with it, but it's the same situation if you want to make proper use of any framework.

And I updated the url for my demo :)

I don't find that important, but I find that it means

1) This is probably a project I want to check out, if it pertains to my stack, and

2) This is probably going to stick around and not be unsupported very soon.

I would be just as happy if FB hadn't developed it, and instead it was just used by them. (Or any other of maybe a dozen big camps) Or if it was just used by a whole lot of people.

Widespread use is a pretty good metric for what library to choose, when you aren't familiar with the landscape.

It isn't. But if something is run by a major company wouldn't you want to know?

It's important because a lot more gets done when multiple people are being paid to work on bugs & features full-time than when it's, say, one person trying to find some spare time in their evenings.

I think the argument is that Facebook is spending an absurd amount of money on engineering. So, anything that emerges as something that huge pool of engineers find useful and cool is probably pretty well made.

On the other hand. I do think you are right. React suffers from this a lot. React is good. It has a lot of strong competition from smaller groups. But, the attitude seems to be "FB will rub so much money against React! That much money and fame will polish anything. Soon it'll be popular because it's popular. Don't fight it."

Continuing the trend of Facebook creating competing tools more often than persisting with and improving existing ones, which I feel dilutes effort and has fragmented a number of ecosystems.

We've got a couple of Facebook fans at work, which has left us with a number of our systems using different tools to accomplish essentially the same tasks, and no strong case on either side for us to standardise on one of them.

Maybe I'm just being bitter because I've had a bad experience, but it's been a maintenance nightmare for us in the office.

When Buck was created (my team created it!) there wasn't anything that supported what Facebook needed in a build tool. There still really isn't, as all the Blaze work-alikes are focusing on various different needs.

I would argue Buck actually helps to solve what you are concerned about. At Facebook, everything builds with Buck (and Google with Bazel I hear)...which means you learn it once and you know it for your Objective-C library, your Android app, your Cxx service, your python scripts, etc. It really helps to standardize on a build system, and at many companies you can only do that if it supports windows/mac/linux, it supports many languages and platforms, is fast, and is easy to pick up. Buck is all of those things.

> At Facebook, everything builds with Buck

Looking at https://github.com/facebook you seem to be using all sorts of build tools...

What Facebook uses internally in their monorepo and what they publish to open source for external consumption doesn't have to be the same build tool.

Facebook uses fbshipit (https://github.com/facebook/fbshipit/) to take slices of their monorepo and publish them to GitHub.

> At Facebook, everything builds with Buck (and Google with Bazel I hear).

Surely not on Android or Skia teams, unless they internally use something other than what AOSP and Skia use.


Gradle was not the default Android build tool when Buck was open sourced.

Any chance you can compare Pants with Buck?

That's not Facebook's fault, that's fanboyism's fault.

I fail to see how the world gets worse from large companies releasing internal tools as open source.

It is very simple. Instead of improving <insert make edition>, they started yet another brand new one.

From my years and years at large companies, these things start because they don't want to share anything in the first place, and then it's just pricey to maintain.

To be fair a majority of the big technology companies all write their own tooling for many things that already exist because they didn't quite fit the way they needed them to and they had the resources to re-invent whatever they want.

Sure when they open source them it further fragments the market and you always get the rush of "Facebook has almost 2 billion users therefore their tools must be the best!" which further exacerbates the issue but I don't blame Facebook for these issues.

It sounds like, in your situation, perhaps there are too many tools being introduced into your projects. Too often I see developers abuse the hell out of npm and the like to just include whatever they need with zero regard for the newly introduced dependency tree and the new tooling that needs to now be maintained.

When Facebook wrote this, the tooling that already existed wasn't available to them -- ex-Google engineers at Facebook wanted something like Bazel/Blaze, but Google had yet to release that publicly. So this is a Hadoop situation where Google had this internal tool that someone else really wanted and because Google didn't open-source it they wrote their own version.

Isn't there a risk that reimplementing a previous employer's tool constitutes a legal/contractual violation of some kind?

It's almost certain that it is a legal violation and if a lawsuit was brought, I'd bet a million dollars in favour of Google; however, the tool itself isn't the final product and it's silly, counterproductive and just outright malicious to sabotage efforts at building good tooling in a competitor's company. Plus, it will set precedent and Google would have to vet every single new internal tool against what the previous employers of their employees were doing.

> To be fair a majority of the big technology companies all write their own tooling for many things that already exist because they didn't quite fit the way they needed them

Too many engineers syndrome comes to mind too.

What tool would you have suggested Facebook improve?

Saying that Facebook should have improved Make or Maven is like saying Linus should have improved CVS or SVN instead of competing with Git. They're in the same space, but they differ at a very fundamental level.

Some of the tools seem more like improvements on existing tools, such as yarn (instead of npm) and jest over jasmine. However, I still applaud Facebook for releasing these tools since they often offer massive improvements over previous tools in many regards. It might be more apparent if you're exposed to web development, since the toolsets are changing constantly and quickly; some stability would be great in this space.

This doesn't mean that we should discourage creating new tools that fundamentally change the ways we work. You mention Git, and I see it as one great example: I remember with anguish some of the SVN trunks, working with their pseudo-tags and the mess you'd often be introduced to.

Waf or Meson

Facebook doesn't have time travel, so contributing to Meson would have been impossible.

Ah I didn't know that Buck was created before 2013.

What about Waf though?

TBH, I never heard of Waf before. It looks interesting.

Buck, Waf, Meson, Bazel....apparently Python is the language for next-gen hash-based build tools :)

Buck is primarily Java, just sayin'

But the syntax of your build files is Pythonish.

Just like Make is in C and the build files are in Make.

Facebook and Google have a problem I am familiar with: Not Invented Yet Syndrome.

Typically they face a problem that few people have ever faced. There may be an existing solution, but not in the open. So they have to roll their own.

Others in this thread say Buck was inspired by Blaze. That seems reasonable and I hardly think Facebook can be blamed for rolling their own when that was the only available option.

You might as well blame Google for fragmenting the Hadoop ecosystem by creating their MapReduce framework.

It seems you lack technical leadership in your organization. Don't blame facebook for it :-)

Nope. I totally agree. I don't trust a single thing from facebook. I mean, good on them that they open source their tools, but most everything that comes out is just a huge pain in the ass with very little benefit to adopt, other than we can say we're using open-source facebook technology.

The funny thing, though, is that keeping up with other web tech, you always have to deal with facebooks API's and sharing and whatnot, and its also, usually a huge pain in the ass.

So, like, what's the benefit of this, switching build systems, other than a tick on the resume for someone who wants to get a job at facebook?

I'm all for using existing tools, but not for the sake of itself. If there is a good reason (and everyone in the team agrees), all now obsolete tools should simply be replaced by the new one.

If this did not work in your office, it is most likely a problem of people in your office not communicating properly about their choice of tools. Did you have people give a short presentation when they introduced a new tool? Did nobody ever point out that you already have various tools that can (easily/elegantly) accomplish what the next new thing(tm) is being introduced for?

I can't speak to the rest of Facebook's stuff, but I think the build tool problem is a special case. Per their docs:

> Buck is designed for building multiple deliverables from a single repository (a monorepo) rather than across multiple repositories. It has been Facebook's experience that maintaining dependencies in the same repository makes it easier to ensure that all developers have the correct version of all of the code, and simplifies the process of making atomic commits.

Having been on the "other side" of the monorepo argument where we tried to make do with improving/extending existing build tools etc. in a rapidly growing engineering org, let me say that Facebook (with Buck), Twitter (Pants? I think?) and Google (with Bazel/Blaze) almost certainly built these to deal with the problem of scaling build management with an ever-growing organization.

The popular model of a dozen or so small repositories in GitHub + Jenkins with Maven/NPM/Rake+Bundler/whatever works fine for maybe a few dozen engineers or more, but one day you wake up and realize there are hundreds of repositories spread across dozens of _teams_ and hundreds of developers. Obviously you've then got a big ol' dependency graph between repos to deal with, so if you need to fix something near the root suddenly you need to run off bumping version numbers and/or fixing intermediate libraries all the way down the graph. Plus version incompatibilities between the dependencies of different libraries ... it's a total mess, and it doesn't make for an org that can easily "move fast and break things", so to speak.

So then to avoid paralysis your options are basically either to silo up (this team owns their stuff, that team owns their stuff, don't bother with shared dependencies) or you go the monorepo route. If you do, then maybe you go and pull all your hundreds of smaller repos into a monorepo. Having everything in one place makes it easier to police the dependency issues within the org & makes it easier for a single engineer to deal with those sort of "cascading changes" instead of shunting that problem onto the entire organization. But in exchange for this "agility" you've then got the problem that builds take multiple hours & the associated tools are often highly language-centric (Maven+Java, NPM+Node, Ruby+Rake, etc.). They don't typically make any reproducibility guarantees either.

Anyway, to make a short story really long: at the time FB, Google and Twitter were hitting these organizational scaling walls, making these decisions and building these tools internally, there really weren't any great tools out there for the monorepo use case. I think that's why all these tools have appeared as side-by-side alternatives rather than improvements on one another or to tools like Maven et al.

Whether or not consolidation is warranted, for the folks who have the problems that Buck/Bazel/Pants solve, it's likely to save 'em a hell of a lot of time, effort and money IMO. It's a good thing that they have been published, even if the value's maybe not immediately obvious.

This. Also, I think that the build system itself is just the tip of the iceberg. At least in Google's case it has recently been very nicely documented [1] that Blaze is "just" one piece of how Google keeps velocity high.

[1] https://arxiv.org/abs/1702.01715


I'm veering a bit towards the opposite: Some of the Facebook tools look quite nice, but there's a certain amount of taint that goes with their corporate heritage, so if there's a good alternative...

My name is Michael Bolin and I created Buck.

When I started the project, Buck had one specific goal: to make Android builds faster (https://youtu.be/CdNw6mRpsDI). At the time, the recommended way of building Android (from Google) was to use Ant. So when someone points to Buck as an example of "creating competing tools more often than persisting with and improving existing ones," I'd like to point out that you can't fix Ant if these are your issues with Ant:

* It is unsound.

* Because it is unsound, it is irreparably slow.

* It uses XML as a build language.

Yes, in July 2012, there were a number of build systems on the market (Bazel was not one of them, though Pants was), and none of them focused on building Android. And even if they did, few (if any) software companies were building an Android app as large as Facebook, so it was unlikely that anyone else was going to design for our scale.

It also wasn't just about build times, but about how I wanted to see us organize code in our repository. At the time, there was a flat list of folders in the Android repo, each called lib-something. This drove me insane, because you inevitably end up with two (or more!) people creating com.facebook.common.StringUtils, each in their own lib-something. (It's also annoying to `ls` this "lib-" directory over time.)

In contrast, Buck/Bazel encourage the use of a unified tree, but still encourage fine-grained modularization (which is key as your build graph gets very large). This has been shown to scale to extremely large monorepos at both Facebook and Google.
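As a rough illustration of that fine-grained style, a small module might get its own BUCK file like this (the names and paths here are hypothetical, not Facebook's actual layout):

```python
# common/strings/BUCK -- one small, focused library target.
java_library(
    name = "strings",
    srcs = glob(["*.java"]),
    visibility = ["PUBLIC"],  # or restrict to specific //... packages
)

# app/BUCK -- a consumer depends on it by its full target path,
# so there is exactly one canonical home for the code.
java_library(
    name = "app",
    srcs = glob(["*.java"]),
    deps = ["//common/strings:strings"],
)
```

Because every dependency edge is declared explicitly, the build graph stays fine-grained even as the tree grows.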

Finally, by having total control of the build system, we were able to build in all sorts of cool tricks to build Android very fast, both in the large and in the small: https://youtu.be/Y9MfGS3qfoM. I don't think there is any other build system we could have decided to work with at the time to achieve these gains.

Buck has since evolved to build everything else at Facebook. This is not because the Buck team set out to conquer the world, but because people internally wanted the benefits of Buck for their builds. Building an alternative toolchain to xcodebuild was a mammoth effort (and one for which I take no credit). Having one build language for a heterogeneous collection of programming languages in a monorepo is no small feat.

Finally, to the people who believe "The big guys want developers locked into their ecosystems," I have news for you: the Buck team is not offended if you use Bazel, Gradle, Make, or anything else. Buck is open source because we wanted to share it with the community, not dominate it. Like many of you, we are excited to show our work and learn from others.

I think when you say "make Android builds faster", you mean "make Android application builds faster" -- as opposed to making Android operating system builds faster. Those are two very different things, and for the uninitiated the casual use of language here is confusing. The Android operating system has never been built with ant, but historically was built with make until around Android N when that team started migrating to ninja-based builds instead.

I got confused by the exact same language 5 years ago when one of the Apache Groovy project managers (before it joined the ASF) started repeatedly saying "Google have now chosen Groovy and Gradle for building Android." I didn't know if they meant building Android at Google, or as the default build system shipping with their (then) new Android Studio tool.

Buck is a great tool but doesn't work on Windows, which Bazel is now starting to support. So for cross-platform builds, it's either GN or Bazel.

The only thing that I'm aware of these days that doesn't work on Windows is C++ code (but it works if you are building for Android on Windows). It's even covered in the getting started guide: https://buckbuild.com/setup/getting_started.html

Yeah, that's what I meant by cross-platform builds when I compared it with GN (C/C++ only, production-ready) and Bazel (multi-language, unstable).

Are you a member of the buck team? Your new account only has activity on this submission.

Wow, so many build systems get mentioned here; when do people have the time to check them all out? I stick with GNU Make because that's the evil I know, not because it's the best tool imaginable...

Not everyone here is an expert in a particular system yet. If a build system runs 20% faster than what you're currently using, and you plan on using it for a good few years, the overall time saved is not insubstantial.
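To put a rough number on that claim, here is a back-of-envelope calculation; every figure below is an assumption for illustration, not a measurement:

```python
# Estimate yearly time saved by a hypothetically 20% faster build.
builds_per_day = 30        # assumed incremental builds per developer per day
seconds_per_build = 60     # assumed current average build time
speedup = 0.20             # the hypothetical 20% improvement
workdays_per_year = 230    # assumed working days

saved_hours = builds_per_day * seconds_per_build * speedup * workdays_per_year / 3600
print(f"~{saved_hours:.0f} hours saved per developer per year")
# → ~23 hours saved per developer per year
```

Multiply by team size and the switching cost can pay for itself quickly, under these assumptions.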

Depends on the language, really. If you work with Java, Make is simply not enough.

Or just use Nix as your build system.

Surprised no one else has mentioned Nix, because this seems very much inspired by it. As someone who uses Nix extensively, I find this interesting, but it doesn't seem as powerful or general.

Can somebody compare this to Gradle?

I mean somebody who has actually used both of them (and not read some versus blog posts).

I think the best people to speak to this would be the Uber folks, as I believe they still use both via okbuck (https://github.com/uber/okbuck):


I would recommend GN and Ninja. It's the Chromium build system. GN generates the Ninja files in less than a minute, and Ninja does a good job with incremental builds. It's also been around for a few years, so it's proven.
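For reference, the basic GN-plus-Ninja workflow is just a couple of commands (the out/Default directory name is only a convention):

```shell
# Generate Ninja build files into out/Default from the BUILD.gn tree.
gn gen out/Default

# Build incrementally; Ninja only rebuilds what changed.
ninja -C out/Default

# Optionally edit build arguments (opens an editor, then regenerates):
gn args out/Default
```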

Is there a watch command to detect file change and kick off the build steps?

Buck has a daemon that does it automatically. Often when you go to build it takes 0 seconds because the daemon has already done it before you get to it:


Note the daemon is automatically started when you `buck build`.

(disclosure: my team created Buck when I was at FB)

Although the daemon doesn't automatically build things. Buck does use watchman (https://github.com/facebook/watchman) to watch files though, and it has a way to run commands when files change: https://facebook.github.io/watchman/docs/watchman-make.html
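Going by the linked watchman-make docs, a watch-and-rebuild loop looks roughly like this; the patterns and target name here are illustrative, not from the thread:

```shell
# Re-run `make all` whenever a matching source file or the Makefile changes.
watchman-make -p 'src/**/*.c' 'Makefile' -t all
```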

Tools like Make are also useful for simple data analysis workflows, and I'm curious to hear any thoughts from any Buck users as to whether it would be useful in those cases too.
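For that use case, a minimal Bostock-style data pipeline in plain Make might look like the sketch below; the URL, filenames, and transform script are placeholders:

```make
# Each target names its inputs, so only stale steps re-run on change.
all: counts.csv

# Step 1: fetch the raw data (placeholder URL).
raw.json:
	curl -o $@ 'https://example.com/data.json'

# Step 2: transform it; re-runs if the data or the script changes.
counts.csv: raw.json transform.py
	python transform.py $< > $@

clean:
	rm -f raw.json counts.csv

.PHONY: all clean
```

Buck's genrule() can express the same steps, but for a small single-machine pipeline, Make's file-timestamp model is arguably a better fit.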

Great! There's also a Lua-based build tool.


Just what we need.... another build tool!

I sometimes suspect that the build tool ecosystem would be very different if only Makefiles allowed soft tabs.
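For what it's worth, GNU Make 3.82 and later does let you swap the hard tab for another prefix character:

```make
# .RECIPEPREFIX (GNU Make 3.82+) replaces the mandatory hard tab.
.RECIPEPREFIX = >

hello:
> echo "no tabs were harmed"
```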

Wonder if this can/will replace CMake for cross platform builds.

As a C/C++ build system it's severely lacking in features. CMake is orders of magnitude more useful.

Doesn't support windows. You're better off with bazel or GN.

What about it doesn't support Windows? Not disagreeing since I haven't used it much and I'm on Linux, but they have "Quick Start" instructions for Windows.


I don't believe c++ support exists yet for Windows (although it works if you are targeting Android).

How, in 2017, do you come up with a build system that doesn't even run on Windows?

They made Buck for themselves, and their infrastructure runs on *nix. Google's internal build tool (before Bazel) was the same. Chromium's GN, by contrast, was made for a product targeted to run everywhere, which is why its Windows support is good.

how is current_year different than last_year?

Do such tools have any quality/security standard?

Does not support Swift at the moment.

It does, although it may not be 100%. I've seen lots of PRs going by that fix problems with Swift support.

Nice build output. Unless you want to look over it afterwards, that is.

A couple of quick notes:

* It also writes build output to an output directory

* It detects a tty and does the right thing, so on CI you get a linear log

* In the output directory it also includes timeline graphs that can be loaded in Chrome's devtools traceview (https://github.com/catapult-project/catapult/blob/master/tra... and in Chrome proper).

* The tracing can be viewed live https://buckbuild.com/command/server.html

(disclosure: my team created Buck when I was at FB)

Sweet! Looks like a great project, congrats!

Anyone care to compare Buck vs pants vs Bazel/Blaze?

Or perhaps write out your individual experiences mentioning:

1) Team size

2) Repo size

3) Repo programming language distribution.

4) development/Testing/build/publishing strategy

5) Of course, the actual experience

So much complexity to put some pictures on a screen and have people click 'like'.

It's dizzying.

I wonder if build-system complexity is an artifact of reducible complexity in other areas.

Perhaps the next time we develop a language, it should come with its own build system that requires no configuration, or only minimal configuration, to the point where we don't need to think much beyond the obvious.

I think a few people had that same thought, and then invented Golang. Love it or hate it, building a typical go app, or even a suite of apps, is pretty darn simple.
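For illustration, the entire build interface for a typical Go module is a handful of stock commands, with no build files at all; the ./cmd/mytool path is a made-up example:

```shell
# The dependency graph is derived from import statements in the source.
go build ./...            # build every package under the current module
go test ./...             # run all the tests
go install ./cmd/mytool   # install a binary (hypothetical path) into $GOBIN
```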

If you can get by using only go for everything, you'll have a great time. But complexity starts to become unavoidable when requirements move beyond single-language ecosystems. At some point simplicity can end up costing more.

How come this tool became no.1 trending on HN?

I agree with the comments in this thread, and I add that Facebook knows fanboys are stupid and there are a lot of them, so they try to take advantage of it.

> Buck: A high-performance build tool

And in the title you read "a fast build tool". Just like Yarn: as soon as it was released, it was supposedly the fastest one.

F: Let's use yarn/Buck. G: Why? F: Because it's faster. G: Have you tried it? Did you measure or benchmark it? F: No, but Facebook claims it's fast, come on!

By the way, sometimes yarn does not work and you need to add a file to manage it. Furthermore, Facebook is using the npm registry; do they pay for it or support it?

Other than that, thanks to Facebook to bring awesome tools to the public, like React.

For my needs yarn was a drop-in replacement for npm, and it is faster (5s vs 17s for a fresh install), but critically it's reproducible: I get the exact same output in node_modules every time I run it, and that alone was worth the switch.

As for using the npm registry (by default), so what? Why would the npm folks care? MS uses it as well with VS Code and its automatic resolution.

Agreed that yarn is faster and reproducibility is critical, but in case you aren't aware, you can have reproducibility with npm too by using the "npm shrinkwrap" command.

Updating a single package version in a yarn.lock file is much easier than updating one in an npm shrinkwrap file, in my experience. With yarn it's a single command. With npm shrinkwrap, you have to install everything from the current snapshot, then install the package you want to update, then run npm prune, then regenerate the shrinkwrap file, then look through hundreds of lines of mostly irrelevant diff to make sure that it did what you wanted it to.
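To make the comparison concrete, the two workflows look roughly like this (left-pad@1.1.3 is an arbitrary example package; the npm steps follow the sequence described above):

```shell
# With yarn: one command updates the package and yarn.lock together.
yarn upgrade left-pad@1.1.3

# With npm shrinkwrap (pre-npm-5), the equivalent dance:
npm install                         # restore the current snapshot
npm install left-pad@1.1.3 --save   # pull in the new version
npm prune                           # drop now-unused transitive deps
npm shrinkwrap                      # regenerate npm-shrinkwrap.json
```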
