The reason ecosystem maturity is so important for Bazel is that its design encourages complete reimplementations. The Rust packaging rules for Bazel reimplement a subset of Cargo's features from scratch. The Docker rules reimplement image building from scratch (they don't actually use Docker). The (Google-maintained) Python rules shell out to pip, but they have a ton of problems: some packages are unbuildable, and they bust the cache constantly and redownload everything.
If you're using C++ or Java, I can heartily recommend Bazel as an improvement to your current tool. If you're using anything else, hold off until the ecosystem is more mature.
I can believe all of these reimplementations are worth it for hermeticity, but I seriously doubt most shops need the extreme hermeticity Google does. It's a thing that really matters at gargantuan scale.
I maintain a ruleset and have been on top of Bazel upgrades for a few months. My experience hasn't been so bad. There definitely are a lot of deprecations, and some code changes required, but it was pretty manageable IMHO.
With that said, I agree that some stacks are more first-class citizens than others. But I feel this is more due to some stacks' departure from the C++ style of build management than an issue of Bazel changes. At our company, the Go org has had a pretty good experience migrating from Buck to Bazel. For our web org, the main challenges have been that Bazel doesn't lend itself well to codebases where inputs and outputs co-exist in the same filesystem tree (incredibly common in the web world, with things like babel-transpiled files, yarn offline caches, lerna, etc.) or where the ecosystem has a bad relationship with symlink handling (e.g. jest, flow, webpack, --preserve-symlinks, etc.).
My experience from before Bazel is that reimplementations were the way to go most of the time when you had projects with multiple languages, because the alternatives led you down dark paths—calling into a separate build system suffers from the same problems as recursive make.
What excites me about 1.0 is the possibility that the third-party rules can mature. It's not too bad if you only need a couple of sets of third-party rules, but my experience is like yours: when a new Bazel release came out, you had to figure out when to upgrade based on your dependencies on third-party rules. If you had several of these, it made upgrading Bazel infeasible.
The biggest drawback that I could see was that all dependencies and toolchains needed to be modeled, but once they were, incremental builds, dependency rebuilds, etc. all Just Worked™.
Getting to that point requires a lot of upfront work, however, which can be hard to justify in smaller groups.
Nix accomplishes bulletproof repeatable builds by sandboxing everything, but I don't believe incremental builds are a design goal (though if you modularize everything into different Nix expressions you will in fact get bulletproof, if annoyingly hard to use, incremental builds).
Brazil is EXTREMELY agnostic to what happens within any given package build, with the associated tooling primarily focused on dependency artifact discovery. The (IMO) most important component of Brazil is the version set, which is a versioned dependency closure that allows for one-off builds (so, your auto-built application packages) as well as merges from upstream version sets (where library vendors release their code). They are the glue that makes the distributed, manyrepo software development model at Amazon feel safe and consistent.
If I had my way, I'd combine the single-codebase UX of Bazel with the multi-codebase dependency tracking of Brazil, because I think they solve complementary problems extremely well.
Everything else is tip of tree, and things are automatically re-tested using checked-in tests if they are affected by your changelist. You basically can't submit if you break anything. So in a way, Google doesn't need "versioning". Whatever is currently checked in is good to go.
Tests are required, of course, and a Google reviewer won't let you submit anything if your tests suck. This, obviously, precludes the use of such a set-up in, shall we say, "more agile" orgs which don't have good test coverage.
Blaze (at Google) is also not just a build system, but also an interface to a much larger distributed build and test backend, which lets you rebuild everything from the kernel upwards in seconds (by caching petabytes of build products at thousands of possible revisions), serves up source code views for developer workstations (code is not stored there either), and sustains the scale of distributed testing needed for this setup to work. As a result, nobody builds or tests on their own workstation, and there's close to zero (or maybe even zero, period) binaries checked into Google3 monorepo. If you need a Haskell compiler and nobody used it in a while, it'll be rebuilt from source and cached for future use. :-)
Fundamentally, I think Google got things very, very right with Blaze. Bazel is but a pale shadow of what Blaze is, but even in its present state it is better than most (all?) other build systems.
The distributed, cached builds are effectively the same thing for Brazil (at the package level), with applications, libraries, system packages, all the way down to compilers being built from source. Brazil doesn't have the level of filesystem abstraction that Blaze/Bazel does, and leans towards local development using overlay version sets for certain core build tools.
>Fundamentally, I think Google got things very, very right with Blaze. Bazel is but a pale shadow of what Blaze is, but even in its present state it is better than most (all?) other build systems.
I think Bazel gets things amazingly right for a single monorepo even w/o all of the fanciness that comes from internal systems and services, but the mechanisms for managing out-of-source dependencies are just flat-out clunky. I suspect this is because Blaze never had to really solve that problem, and I think Brazil is a much better tool for solving it because that's what it had to do.
I'm not sure I agree. You can reference arbitrary, versioned out-of-tree deps via the WORKSPACE file. You can pull from git at a tag or revision, you can pull from cloud storage, Maven, PyPI, etc. Just about any realistic scenario is supported, and those that aren't yet are easily scriptable using a restricted Python-like extension language.
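For instance, a WORKSPACE sketch (repo names, URLs, and hashes here are hypothetical):

    # WORKSPACE
    load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
    load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

    # a pinned archive over HTTP
    http_archive(
        name = "libwidget",
        urls = ["https://example.com/libwidget-2.3.tar.gz"],
        strip_prefix = "libwidget-2.3",
        sha256 = "...",  # pin the exact bytes
    )

    # pulled from git at a tag
    git_repository(
        name = "libgadget",
        remote = "https://github.com/example/libgadget.git",
        tag = "v1.4.0",
    )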
So, please consider the difference between complex and unfamiliar.
Given that, as far as I can tell, there's nothing so compelling that it's worth switching over your project or organization. The tooling around Maven is so good, and the inertia so strong (for my organization anyway), that there's no real incentive to switch.
We've found that non-Maven developers grumble a lot when forced to use it, but then find the system, while verbose, is really solid. We are certainly not willing to throw that all away to play with Google's latest cool thing.
- We actually made good use of Ant tasks and macros for having convention over configuration build infrastructure
- Maven was still on 1.0 beta releases, with incomplete support for the plugins we required
However this was more than 10 years ago, and nowadays I have yet to find something that beats the Maven ecosystem, especially given that I don't suffer from XML allergy; rather, it is my favorite configuration format.
I cry a little bit every time I have to wait for Android Gradle builds.
Guess what: the upcoming Android Summit features yet another Gradle performance-improvements talk. They can't get enough of them.
There are also community created solutions like https://github.com/johnynek/bazel-deps, and https://github.com/square/bazel_maven_repository.
Disclosure: I maintain rules_jvm_external.
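For reference, wiring Maven artifacts in with rules_jvm_external is a couple of WORKSPACE stanzas; a sketch (the artifact and version below are just examples):

    load("@rules_jvm_external//:defs.bzl", "maven_install")

    maven_install(
        artifacts = [
            "com.google.guava:guava:28.1-jre",
        ],
        repositories = [
            "https://repo1.maven.org/maven2",
        ],
    )
    # targets are then addressable as @maven//:com_google_guava_guava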
But it's not just speed. Bazel does do true CI: you make a change in a lib and then you can immediately compile and run tests on anything that depends on it directly or indirectly. I have not been able to do that with Gradle, forget about Maven.
Finally, if you use other languages with shared artifacts such as protobuf files, Bazel is simply amazing: change a proto file and recompile everything that depends on it, in every language.
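A sketch of what that looks like in a BUILD file (names are hypothetical; bindings for other languages come from their own rulesets):

    proto_library(
        name = "user_proto",
        srcs = ["user.proto"],
    )

    # Java bindings, regenerated whenever user.proto changes
    java_proto_library(
        name = "user_java_proto",
        deps = [":user_proto"],
    )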
So it depends on (a) the shape of your graph, and (b) the nature of the change being built (e.g., does it invalidate a core library and all its downstream, or is it close to the app?).
The other thing is remote build execution - the high focus on hermetic builds and parallelism makes building on a cluster of build workers extremely powerful, reducing build times, reducing load on developer machines, etc. And there are out-of-the-box solutions, initially from google on GCP to let you do this yourself.
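Turning that on is mostly a matter of flags in .bazelrc; a sketch (the endpoints here are hypothetical):

    build --remote_cache=grpcs://cache.example.com
    build --remote_executor=grpcs://rbe.example.com
    build --jobs=200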
So... is bazel faster? Hell yeah, in some situations. Kinda in others. Not in some others. You need to think through what your development model is, your scale, how you construct (or want to construct) your build graphs, isolation of components, etc.
To my knowledge rules_python still doesn’t support pip3 installs, and from browsing Github, Twitter, and the Bazel Slack it seems everyone is reimplementing their own package management integration because the default is so undercooked.
Right now rules_pygen is the best open source option, but it isn’t fully featured and was broken by Bazel 0.28.
I can't think of any reason you couldn't call the docker tools in a genrule() though, if you prefer.
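Something like this should work, I think, though it deliberately escapes the sandbox to reach the Docker daemon, so it gives up hermeticity (image and target names are hypothetical):

    genrule(
        name = "image_tar",
        srcs = ["Dockerfile"] + glob(["app/**"]),
        outs = ["image.tar"],
        cmd = "docker build -t myapp -f $(location Dockerfile) . && docker save myapp -o $@",
        local = True,  # run outside the sandbox so the docker CLI can see the daemon
    )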
I've noticed this as well. I'd love to use Bazel for our Python 3 project, but as far as I can tell, the advertised Python 3 support doesn't actually work. There are several issues filed for this, but apparently no progress since 2018. Some people reported workarounds, but none that I could reproduce.
I believe the core is solid, but language support feels alpha at best.
It’s dozens of hours of work, and even then you quite easily can find yourself not enjoying Bazel’s caching features because of pip wheel being non-deterministic. It’s been a hard road.
Any idea if deterministic wheels is on the Python project's radar?
EDIT: Quick Google search answered my question--the first result is an issue you filed here: https://github.com/pypa/pip/issues/6505
In our Java+Scala+Python monorepo we've got our Java+Scala tests caching, so that part of our test suite gets as low as 2 minutes in CI. The Python part of our codebase is maybe 5-10% of the codebase, but its tests take ~5 minutes in CI because of no caching.
A huge congratulations to the Bazel team for shipping 1.0!
Most other tools seem to be competitors to Make (SCons, Jam, tup), souped-up scripting systems (Ant, Gradle), or configuration toolboxes (CMake). The ones that really stand out are the tools that tackle the problem of expressing builds in a way that’s both expressive and declarative. In my mind, these fall into two families: Gyp and Bazel (which includes Buck, Pants, Please.build, and that new custom thing Chrome uses).
(And then there’s Ninja, which I appreciate for taking a good / useful subset of Make’s features and doing it really well.)
They classify it based on how the build system detects what needs to be rebuilt and how it chooses the order in which things should be built.
Bazel is the open-sourced version of Blaze, Google's internal build system. Buck is Facebook's open source version of their own implementation of a Blaze-like system. Pants was built as an open source implementation of Buck, before Buck was open sourced.
From a bird's eye view, knowing this history of their creators, I would guess that Bazel is more complicated internally, its API has a wider surface area, and ships with many optimizations included for the use case of building extremely large artifacts with many, many dependencies.
Someone who's more familiar with both, feel free to add details or correct my over generalizations! I've used Pants professionally and tried setting up Bazel for personal projects, but found it was too complicated for my needs.
When you want to write new build rules (to support your specific use case, to support an unsupported language, etc.):
- in Buck, you have to update the Buck code itself (you have to touch many places in the Java codebase)
- whereas in Bazel, you can extend Bazel with the Starlark language (a subset of Python) and do all sorts of things, like supporting a whole new language, without touching the Bazel core at all (sketch below)
This was the main reason my company is switching to Bazel from Buck.
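For a taste, a minimal custom rule in Starlark looks something like this (all names are hypothetical; it just concatenates its inputs):

    # my_rules.bzl
    def _concat_impl(ctx):
        out = ctx.actions.declare_file(ctx.label.name + ".txt")
        ctx.actions.run_shell(
            inputs = ctx.files.srcs,
            outputs = [out],
            command = "cat %s > %s" % (
                " ".join([f.path for f in ctx.files.srcs]),
                out.path,
            ),
        )
        return [DefaultInfo(files = depset([out]))]

    concat = rule(
        implementation = _concat_impl,
        attrs = {"srcs": attr.label_list(allow_files = True)},
    )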
I’m not sure I get what makes Bazel so good. It seems pretty simple to me. You have a bunch of directories with BUILD files that are each sort of like Makefiles.
Am I missing something? It kind of just seems like a hodgepodge of scripts. I don’t dislike it, but I’m also not seeing anything amazing.
If you use Make long enough, there are some obvious improvements you want. Multiple outputs, rebuild when options change, and easier cross-compiling are the top ones. Various build systems attempt to add these features. In my mind, Ninja is the only build system that added these features well, and it worked because Ninja removed all the other features to focus on just the build process (as opposed to specification / configuration).
If you think about these problems with Make, you realize that it kind of boils down to one big thing: you want your build system to always rebuild when necessary, and you want it to almost never rebuild when unnecessary. (Plus the bit about cross-compiling.)
Other build systems rely on the developer writing the build scripts to just “get it right”. Bazel is different because it sandboxes the rules to enforce hermeticity. In Make, I can include "pear.h" which includes "orange.h", but let’s suppose that "orange.h" is actually a generated source file… now, try writing this out in a Makefile (if you’re a masochist, say you’re cross-compiling). Yes, "orange.h" should be declared as an input to anything that includes "pear.h", but in practice, developers are going to screw it up. At that point you can end up with a build that uses two different versions of "orange.h".
Bazel sandboxes the commands so that any rule not declared to depend on "orange.h" will not be able to open "orange.h" at all. The process won’t see the file at all.
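Concretely, in a BUILD file the generated header has to be a declared input; if it isn't, the sandbox simply doesn't contain it. A sketch (rule names hypothetical):

    genrule(
        name = "gen_orange",
        outs = ["orange.h"],
        cmd = "echo '#define ORANGE 1' > $@",
    )

    cc_library(
        name = "pear",
        hdrs = ["pear.h", ":gen_orange"],  # orange.h is declared, so it exists in the sandbox
    )

    cc_binary(
        name = "app",
        srcs = ["main.c"],
        deps = [":pear"],  # drop this dep and the compile can't even open orange.h
    )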
This sandboxing opens the door for all sorts of optimizations and query features that are simply unreliable if you have to trust that human developers are writing the rules correctly. These optimizations, for large projects, result in radical build time improvements. For most build systems, a shared build cache would come with a risk of bad cache entries, but with Bazel, the risk is substantially lower. It’s also easier to get reproducible builds, which make it substantially easier to do certain types of auditing.
The OP noted that hermeticity is required for fast builds, and that humans can't be relied upon to provide hermeticity as a property. How does build2 guarantee hermeticity or otherwise support fast builds without a clean-room solution à la Bazel?
To give a specific example, Bazel may prevent you from accidentally using a different version of the compiler while build2 will detect that you are attempting to use a different version.
Regarding caching (and distributed compilation), this is currently on the TODO list though a lot of the infrastructure is already there. For example, the same change detection that is used for high-fidelity builds will be used to decide if what's in the cache is usable in any particular build.
While I agree we should mention these points in the documentation (things are still WIP on that front), I don't think cross-compilation deserves mentioning: for any modern build system it should just work. In build2 we simply implement things in the "cross-compile first" way, with native compilation being a special case (host == target).
and then, after a successful build, you create a new file named "foo.h" in a directory earlier in the search path than the foo.h that was used in the previous compile?
In any case, in build2 this is "handled" by not including headers as "foo.h" but as <libfoo/foo.h>, that is, with the project prefix. You can read more on this here: https://build2.org/build2-toolchain/doc/build2-toolchain-int... And the proper fix will hopefully come with C++20 modules.
Bazel is basically cross-platform out of the box if one is careful with it: that includes a consistent build organization across platforms, cross builds if configured, and so on. It can build a library declaration for mobile and desktop and embedded and web in a single workspace; try that with CMake.
Bazel has a universal package system (e.g. download an archive or git repo) that allows for custom ecosystems (yes, some of which are not great yet) to exist regardless of what is normal in that ecosystem. This is especially notable for C++, where using CMake as a package system is a nightmare; with Bazel I can just download any C++ repo off the internet and ignore its CMake file in favor of my own BUILD file. Also notably, it's the first build system for C++ that hasn't required me to build or configure Boost myself: someone can run bazel build against a repo of mine with Boost in it without even knowing what Boost is, and it just builds.
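The mechanism for ignoring the upstream build system is the build_file override on http_archive; a sketch (names and URL hypothetical):

    load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

    http_archive(
        name = "some_cmake_project",
        urls = ["https://github.com/example/proj/archive/v2.1.tar.gz"],
        strip_prefix = "proj-2.1",
        build_file = "//third_party:proj.BUILD",  # your own targets; upstream CMakeLists.txt is ignored
    )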
Which brings me to hermeticity and reproducibility. If one is careful, running Bazel always uses the same code for a platform (I've never had or seen weird "on my machine" issues with it, which is impressive; reverting/stashing changes has always gotten people a working build again). All of the sources are version-pinned, and so on. Getting to hermetic takes some work, but it's possible, which is nice. A side effect of all this is that my instructions for a Bazel project are usually: install Bazel, run build; and it just works! Yes, parts of the ecosystem suck and break this, but that's a work in progress.
There are other features, like the query system, the test runner, the macro system, the local override idiom, the parallel build. The point is it really is a build tool for whatever needs to be built, however it needs to be built, and not just a scripting language useful for building things.
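As one example, the query system makes dependency questions that are painful in most build tools into one-liners (labels hypothetical):

    # everything that directly or transitively depends on a library
    bazel query 'rdeps(//..., //lib/utils:utils)'

    # why does //app:main end up depending on @zlib//:zlib?
    bazel query 'somepath(//app:main, @zlib//:zlib)'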
Disclosure: I work at Google, with Blaze, but not on it, or on Bazel. All opinions mine.
One of my favorite uses of Bazel is in CI/CD. I built a demo which builds the applications, creates Docker images, and then applies a K8s manifest to a cluster. It's OSS now under the GCP handle: https://github.com/GoogleCloudPlatform/gke-bazel-demo
Happy to answer any questions, public or private (email / website in bio).
Is anyone using Bazel to ship cross-platform GUI applications?
For God's sake, no.
The last thing I need to compile Qt is a gigantic framework requiring the JVM.
Plus the fact that Bazel has so many side effects that even quantum physics experiments look more reproducible: just try to compile TensorFlow and enjoy the fun.
Stick to CMake, thanks.
Because CMake, contrary to Bazel, does not break backwards compatibility in its options and config every two minor versions?
> The latter builds in a clean room, Make does not.
Sandboxing should be the concern of the package manager / deployment pipeline, not the build system. That just makes things redundant and painful to debug.
Lol that’s only recently true. CMake was infamous for breaking compatibility. Anyway, Bazel was pre-1.0 until just now, so of course expect some instability.
> Sandboxing should be the concern of the package manager / deployment pipeline, Not the build system concern. That just makes things redundant and painful to debug.
It makes things specifically very easy to debug. Not everyone enjoys troubleshooting issues on their machine due to their local environment.
(Also, cmake has become the de facto solution for open source libraries, not sure why they would select something else)
The real issue for highly reusable products like Qt is portability of the build tool. CMake works on the long tail of OSes, some of which do not have any usable JRE available. That limits the ability to use Bazel as the only build tool.
For Bazel to be useful to projects like Qt (and other horizontal libraries like openssl, ICU, curl, ...) one would need a system for generating CMakefiles from BUILD files. The developers of those apps could use Bazel to improve build and test scaling, and then generate CMake (or other build tool) files as part of their packaging and distribution. I don't propose that this is an easy thing to do - all rules would need translation into other languages - just that it is a path which may benefit some projects.
Bazel is also a great build system, but yeah, the JRE dependency will obviously make it less desirable for some users. Still, I hope people consider it anyways, because its focus on correctness is pretty great.
Meson will most likely be kept as the GTK, GNOME build system.
It is the build system of lots of projects already, not just GNOME but also freedesktop stuff too, libinput is one.
Everyone in the C++ community is gravitating towards CMake, Conan and vcpkg; they won't migrate to something else now.
Scons, qmake, waf, premake, Gradle C++, MSBuild, ...
I know which horse to bet on.
Then again, this is all irrelevant. What these projects all had to offer at one point is historical. Qt has to leave qmake because of qmake. During that time they evaluated many options, including a fairly innovative design called qbs - Qt Build System. In the end, the choice of CMake makes a lot of sense, as it is clearly the best available choice that had ecosystem support. Qt is unlikely to switch build systems again soon.
This entire argument has happened before, but instead of meson vs CMake it was CMake vs autoconf. The thing is, I suspect CMake will still exist even as Meson inevitably continues to gain traction, just as autoconf (unfortunately) exists today.
There is, in fact, room for more C++ build systems, and almost definitely room for better interop. (Meson and CMake have some interop today.)
It is definitely not the de facto solution for open source. After using CMake as a C++ developer, I will never use CMake again. I simply don't have time to write a program in a slow, stringly-typed, ad hoc DSL just to build my actual program.
Sigh. I remember debugging a complex makefile generation issue (it took me 3 days; two guys before me had failed), and it didn't make me hate the make tool, I just wished it had better tracing/debugging. But CMake, that's a different story...
most projects are using CMake nowadays : https://www.jetbrains.com/research/devecosystem-2018/cpp/
In April 2018, I tried to do a quick port of a tiny gsl::span-using toy app to absl::span, but I gave up, because Abseil wanted me to build my app using Bazel and Bazel docs seemed to assume that I already have some context that Googlers would have but I hadn’t.
(I emphasize that this was a _quick_ attempt at a toy program and not about making a serious time investment to learn a tool for a serious project.)
Is it possible to do some sort of conglomeration where we could bazel-all-the-things and still publish separate artifacts (jars, debs, etc.) for different purposes?
If you use `local_repository`, then you can link whatever's checked out from version control together. This can be dangerous since there's nothing that enforces what version of what works with what, but it's helpful for example, if you want to beta test a new version of a ruleset in a workspace that consumes it.
If you want to be strict about publishing artifacts and versioning, then `http_archive` is the way to go. You can choose your publishing schedule and other workspaces can independently manage which versions of the published artifacts they want to use.
`git_repository` is a middle ground if you don't want the hassle of publishing versioned artifacts, since it lets each workspace reference specific commit SHAs for the things they import.
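Side by side, the three look roughly like this (names, paths, and hashes hypothetical):

    load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
    load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

    # whatever is checked out locally — fast to iterate, nothing enforced
    local_repository(
        name = "my_rules_dev",
        path = "../my-rules",
    )

    # a published, versioned artifact — strict and reproducible
    http_archive(
        name = "my_rules",
        urls = ["https://example.com/my-rules-1.0.tar.gz"],
        sha256 = "...",
    )

    # a specific commit, no publishing step required
    git_repository(
        name = "my_rules_git",
        remote = "https://github.com/example/my-rules.git",
        commit = "abc123...",
    )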
I'm actually currently leaning towards something like Nix or Guix just because it really encompasses everything...
Gradle is far from "completely undocumented". Unless you mean the Android Gradle Plugin, which surprisingly _is_ somewhat documented, although the docs are tough to just accidentally stumble upon.
The binary ships its own JRE and starts instantly (it uses a server), so why is Java an issue, exactly?
Has anyone gotten a Swift + Objective-C project running on Bazel or should I stay with Facebook's Buck?
Edit: Found the correct links for the Swift/Apple support docs.
I don't like the tools, but would it be worth the trouble to change to Bazel?
My personal experience is that you can do a gradual migration by working bottom-up, starting with the components that have no dependencies. There is a learning curve to it if you are the one writing rules, and it can take a while to get the hang of it / find where the good resources are.
I would evaluate it on an estimated cost/benefit basis. The primary benefit for large projects is build times. If you can quantify how much time your developers are spending waiting for builds, and estimate how much time you can cut off with Bazel’s shared caches, you can guess how quickly it will pay off or whether it will pay off at all.
If nothing else, you can find a leaf dependency somewhere in your projects and write some quick BUILD files for it. That shouldn’t take very long, if you have a half-day or day to spare.
That, combined with Maven->Bazel being a common migration path for Java projects, should mean that your costs to change are incremental.
Another benefit for large projects is test execution time. If you can parallelize all your testing on a farm of machines you can bring your CI time down by an order of magnitude.
What helps is to understand the separate phases of the Bazel build process and look at other Bazel repositories.
To me it’s not as bad as e.g. the mess I remember going through with TypeScript back when you had to write a lot of your own types for libraries you used.
A bunch of the problems I had might be due to out of date js tools, since apparently Bazel has been breaking compatibility a lot pre 1.0. Things I pull from the docs of the main js/ts bazel libraries are more likely to error than work. Will wait a while for things to catch up now that it's 1.0 and check back in next year or something.
Once you familiarize yourself with the concept of rules and targets, you can follow the instructions for whichever ruleset you want to use. The official one for JS is rules_nodejs
Another thing that really helped me cement my understanding of Bazel was to deep dive into Starlark and write my own rules. The examples repo is a great resource for that
So in order to benefit from the two, it would likely require GCB to work with Bazel natively and also to have some powerful state abstractions in order to optimize builds in a distributed fashion.
Bazel, like Buck and others, tries to bring to the table a build system / deployment system that is multi-language, multi-platform and developer-oriented. A holy grail that many developers (like me) have looked for for decades, and that many (large) organizations more or less tried to build one day (and mostly failed).
It is a good idea. It is a required tool to improve productivity. However, while the idea is good on paper, in its implementation Bazel gets it damn wrong.
- Bazel is centered around "mono-repo" culture, making it much harder to integrate with multi-source, multi-repo, multi-version projects like many of us have. While I have no doubt that it is great at Google, the external world is not Google.
- Bazel is made in Java and requires the JVM, and this is a problem. That makes Bazel not a "light" tool that is easy to deploy in a fresh VM or in a container.
- Bazel mixes the concepts of a build system (like Make, Ant and co.) and a deployment system (like rpm, pkgsrc, etc.). That makes Bazel pretty hard to integrate with projects that have an existing build system, and almost impossible to integrate INSIDE another deployment system (a usual package manager or deployment pipeline). The problems that Bazel faces with some languages (Python, Go) are a consequence of that.
- Bazel venerates and follows the cult of "DO NOT INSTALL": compile and execute in the workspace; there is no "make install", no installation phase. While "convenient" in a monorepo, this is often a nightmare because the boundaries between components can easily be violated... and you end up with many projects that use internal headers or interfaces.
- Bazel makes it (almost) mandatory to have internet access to compile. This is a major problem in many organizations (like mine) where downloading random sources and binaries from the web is not acceptable for security reasons. Try to run Bazel in a sandbox... and cry.
- Related to what I said before, Bazel mixes build system and deployment system. In doing so, it makes the same mistake as many language-specific package managers and makes it uselessly hard to depend on an already-installed, local library / component.
- And finally, last but not least... the options. Bazel throws away 30 years of conventions and naming from the GNU / BSD world to create its own... That makes the learning curve difficult, especially with (until recently) very sparse and outdated documentation.
I have no doubt that inside Google or Facebook, Bazel or Buck are amazing. But they have been released too late for the external world, in my mind.
Nowadays platform-independent package managers like Spack (https://github.com/spack/spack), Nix (https://nixos.org/nix/), and GUIX (http://guix.gnu.org/) give 95% of the advantages of Bazel without the pains of it.
2) bazel can do offline compiles, and is actually built to run in sandboxes
3) bazel can act as a build system only and delegate to external package managers
Look at rules_nodejs for example, and the managed_directories + yarn_install/npm_install rules (sketch after this list)
4) depend on already installed stuff. Toolchains already provide a way to do this, and the android_sdk_repository rule literally requires preinstallation.
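A sketch of that rules_nodejs delegation (workspace name is hypothetical, and the exact load path varies by rules_nodejs version):

    # WORKSPACE
    workspace(
        name = "my_workspace",
        managed_directories = {"@npm": ["node_modules"]},
    )

    load("@build_bazel_rules_nodejs//:defs.bzl", "yarn_install")

    yarn_install(
        name = "npm",
        package_json = "//:package.json",
        yarn_lock = "//:yarn.lock",
    )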
Seems to me you're making a lot of claims about Bazel without having used it.
I lost literally weeks of my life to make Bazel work in external build systems for tensorflow and others. And I am apparently not the only one : https://archive.fosdem.org/2018/schedule/event/how_to_make_p...
So you don't get to tell me I haven't used it.
All the answers you gave are "fixes" that have been made after the community complained. Fixes that were sometimes not even documented.
Actual support for the things you claimed is quite good now, and that's my point. You posted a number of claims that are now no longer true as of version 1.0. Yeah, a year ago, things were bad, but then again, it was a project at beta level in rapid development, and pretty much for early adopters.
I aim to follow best practices and mark out any undocumented gotchas.
* Assured reproducibility via sandboxing
* Distributed caching
* Distributed execution (experimental)
* Support for long-lived worker processes
* Static analysis of build dependencies
* Uniform CLI for builds and tests
Build definitions and extensions are written in Starlark, a subset of Python.
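A minimal BUILD file, just to show the shape of it (a sketch):

    cc_library(
        name = "hello_lib",
        srcs = ["hello.cc"],
        hdrs = ["hello.h"],
    )

    cc_test(
        name = "hello_test",
        srcs = ["hello_test.cc"],
        deps = [":hello_lib"],
    )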
If you work in a large repo, especially one with multiple languages, you should be interested in Bazel.
Many of Google's OSS projects now build with Bazel (TensorFlow, K8s, Angular).
Buck and Pants are similar tools by Facebook and Twitter respectively, inspired by Bazel's closed source predecessor Blaze.
It is designed to help give you fast distributed builds with high shared cache hit rates, reproducible builds, and a lot of tools to analyze dependencies.
Personally, I like using it any time a project starts using multiple languages, protobufs, or generated sources.
It's possible, but it's not trivial, and not nearly as smooth as using the Go tools in the first place. Unless you have a hair-on-fire problem dealing with cross-language dependencies or insanely long build times, it will probably not be a good use of your time.
"Can I migrate because bazel is the new thing"
Unfortunately, bazel's future is significantly complicated because of the lack of a repo partner in the ecosystem.
In short, a true single repo experience with bazel would be easily 10x the current one.
Bazel is better than Maven, Pants... you name it.
Think of this like Protobuf or Kubernetes. It's an open source tool used to build things at scale. There will be lots of users and contributors. A cottage industry will spring up, and this will grow well beyond Google.
Android is probably the only platform where almost at every conference there is a regular talk on how to improve build times.
Not even C++ conferences talk so much about build times.
I compiled Java projects from 1996 until 2004 just fine.
And Maven runs circles around Gradle's performance, without requiring a background daemon sucking up at least 2GB, minimum.
Gradle is the only reason Groovy is still somehow relevant and it has to thank Google's Android team for it.
As for the Oracle/OpenJDK remark, it is an open source project, under the GPL with Classpath exception license, it is up to the ones that keep saying that Oracle doesn't do a good job to contribute.
If people use it there will be no deprecation date. You can just fork and continue.
However, it's unlikely to be abandoned.
Companies relying on Bazel would step in and support it. We’ve seen this play out plenty of times before, e.g. Hudson to Jenkins or OpenOffice to LibreOffice, where a popular OSS product loses a corporate sponsor and continues.
I don’t work there, but I have talked to Googlers working on build systems.
I'm trying to sell my team on it, but none of us has experience with Bazel. I'm trying to figure out how to do a small PoC where we migrate one intermediate thing in the monorepo to Bazel and try to prove out how it can take over everything.
- You are going to screw things up a couple times. The documentation for Bazel is not always clear, but you can usually find examples or explanations of complicated stuff on forums somewhere.
- Start with a leaf dependency, something in your codebase that doesn’t depend on anything else. Then work your way up. Just write a WORKSPACE + BUILD.bazel in your root, and then put a BUILD.bazel in the directory for the dependency you are going to work on (a minimal sketch follows after this list).
- Look at examples like TensorFlow, especially for how to handle third-party dependencies (although TensorFlow is going to do it in a more complicated way).
- Migrate your tests as you go.
- Just run "bazel build //...:all" or "bazel test //...:all" from your project root as you go to make sure things don’t break.
- Undoubtedly it will take a while to develop a good mental model of how Bazel works. For fun, try looking around in the output directories, or look at the sandboxes it creates.
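The minimal starting point mentioned above might look like this (names hypothetical):

    # WORKSPACE at the repo root — can start nearly empty
    workspace(name = "my_monorepo")

    # BUILD.bazel in a leaf Java package
    java_library(
        name = "strings_util",
        srcs = glob(["src/main/java/**/*.java"]),
    )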
What if I wanted to fork tensorflow and integrate my project into it, would I need the tensorflow fork as something like a git submodule in my main repo, or is there a way to tell bazel there's another repo somewhere else and where it fits into the graph?