My experience is that Gradle projects require a PhD in Gradle AND intimate familiarity with the tastes of whoever wrote that particular project's .gradle files. Is Bazel like that too?
When we inherit codebases from other teams, often one of the first things we do is convert it from Gradle to Maven.
I honestly couldn't care less about how flexible and powerful Gradle or Bazel or some other build tool is compared to Maven. Maven is simple, the same in every project (this is huuuuuge), just works, and gets out of the way.
If you enjoy using Gradle for its customizability, flexibility, and power - you are doing it wrong. You have created an overengineered monster I would dread to work with. I don't want to learn Groovy, I want to develop applications.
i used to hate maven - i was a big fan of Ant. But as time went on, i saw how useful maven was: being _exactly_ the same in every project, requiring the same project file structure, requiring the same build steps, the same build commands, etc.
People who own projects seem to believe their project's build is special and requires custom stuff to make it work. It rarely actually is. Just shoehorn your project into maven.
Hard to overstate how useful that is. Configuring something in a Maven build may be verbose or sometimes hard, but most of the time the build just does its thing, and all developers know at least `mvn clean install`.
This. I was quite happy to try out gradle when it appeared, but when I read a few tutorials it came down to: this is just Ant all over again, but this time instead of XML I get Groovy.
Maven does one thing great: It forces you to use their build steps and every project has exactly the same steps.
Gradle is like Ant or CMake.
If you don't need any plugin - then, yes, it's simple.
But once you need specialized plugins it's no better than gradle/bazel. The latter two let you add some logic in the build scripts - so at least it's visible to users. If you need special functionality in Maven you have to write a plugin or gather a few plugins. And when it breaks down (as all tools do) it's difficult to find the code and understand the problem.
And a language that sometimes has strange design choices.
Groovy is like PHP; if I need to upgrade from Java it would be to Scala. At least then I'd get a superior language in exchange for magnitude-slower builds.
Recently I tried to find the correct and canonical way to set up a new, small Spring Boot project for a customer. No one could tell me how: "we just use Spring Initializr and copy the relevant files." From where? Uh oh. So I tried to find a canonical solution. Did some research. At least 14 variants! No clear winner.
Nobody really understands it, it seems, but at least they don't have to use the dreaded Maven.
(For those who don't know, Maven's main disadvantage is that it is written in XML. Since Maven files frequently get over 30 lines long - especially if people don't know what they are doing - and (parent) projects need to be updated twice a month on average because of updates to dependencies, this is a living nightmare to some people ;-)
Might contain traces of both sarcasm, irony, exaggerations and other literary tools. Those who get it, feel free to laugh. Those who don't: After ~15 years in the field I've not seen anything working as well as Maven on the kind of projects and teams I work on and I also don't think I've seen any other project getting as much undeserved flak.
I would say that this is Maven's most obvious disadvantage. Its main disadvantage is its inflexibility.
I'm working on a little library right now that is sort of a facade for two possible backend libraries. Some code is shared, some depends on backend A, some on backend B. I want to separate the three kinds of source code, compile and test each against the correct dependencies (none, A, or B), and then put all the class files in one jar file.
In Gradle i would define some extra configurations for the A and B sides, then customise the standard jar task to include their output.
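Something along these lines, I imagine - a sketch in the Gradle Kotlin DSL, where the source-set names and dependency coordinates are all invented for illustration:

```kotlin
// build.gradle.kts - sketch only; "backendA"/"backendB" and the coordinates are made up
sourceSets {
    create("backendA") { java.srcDir("src/backendA/java") }
    create("backendB") { java.srcDir("src/backendB/java") }
}

dependencies {
    // each side compiles and tests against only its own backend
    "backendAImplementation"("com.example:backend-a:1.0") // hypothetical coordinates
    "backendBImplementation"("com.example:backend-b:1.0") // hypothetical coordinates
}

tasks.jar {
    // fold both extra source sets' class files into the single published jar
    from(sourceSets["backendA"].output)
    from(sourceSets["backendB"].output)
}
```

Creating a source set auto-creates the matching `backendAImplementation`-style configurations, which is what makes the per-backend dependency wiring reasonably short.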
What would i do in Maven?
My Maven knowledge is pretty out of date, but i think the Maven answer would be "don't do that, create three modules and build and publish three artifacts". Or, to put it another way "our tool can't do that, so you should make life worse for your users, but we're going to insist that our way is actually the proper way to do it, for mysterious quasi-mystical reasons".
Typically, customization of Maven goals is done via profiles. You can modify the configuration of the plugins depending on which profile is active. Though I'm obviously not sure whether it will solve your problem without knowing the exact details.
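For what it's worth, the shape of a profile that swaps in different dependencies or plugin configuration looks roughly like this (pom fragment; the profile id and coordinates are invented):

```xml
<!-- pom.xml fragment: activate with `mvn verify -PbackendA` (profile id invented) -->
<profiles>
  <profile>
    <id>backendA</id>
    <dependencies>
      <dependency>
        <groupId>com.example</groupId>      <!-- hypothetical coordinates -->
        <artifactId>backend-a</artifactId>
        <version>1.0</version>
      </dependency>
    </dependencies>
    <!-- plugin <configuration> overrides can go in <build><plugins> here -->
  </profile>
</profiles>
```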
In the C++ world we have exactly the same problem with CMake.
"Modern cmake" (i.e., target-based projects made possible since the release of cmake v3.0) is terribly easy and straight-forward, provided that your devteam understands the need to stay in their lane.
project(), add_executable()/add_library(), target_include_directories(), set_target_properties(), and add_subdirectory(). This gets any MVP working, and any project 95% of the way to where they need to be.
Add in set() to define helper vars like proj_SOURCES, proj_HEADERS, and proj_INTERFACES, and option() to get fancy cacheable settings.
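Put together, that handful of calls is roughly this (a sketch; the project and file names are invented):

```cmake
# CMakeLists.txt - minimal target-based style; names are placeholders
cmake_minimum_required(VERSION 3.1)
project(proj CXX)

option(PROJ_BUILD_TESTS "Build the unit tests" OFF)

set(proj_SOURCES src/main.cpp src/util.cpp)   # helper vars, as described above
set(proj_HEADERS include/proj/util.h)

add_executable(proj_app ${proj_SOURCES} ${proj_HEADERS})
target_include_directories(proj_app PRIVATE include)
set_target_properties(proj_app PROPERTIES CXX_STANDARD 17)

if(PROJ_BUILD_TESTS)
  add_subdirectory(tests)   # note: add_subdirectory(), not add_directory()
endif()
```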
Cmake only becomes a problem if devteams have no idea how to write simple code, and can't help turning any hello world into a big bowl of spaghetti code. You don't need functions or macros or anything at all, unless you want to waste time feeling smart for writing towers of hanoi in cmake, or shoving square pegs into round holes.
In spite of having about a decade of experience with cmake projects under my belt, the only place I ever defined functions in cmake was to add boilerplate code registering unit tests with ctest by just passing the source file name, and even that was not worth the trouble.
You can write stuff as convoluted and unmaintainable as you'd like, but that's on you, not the tool.
In this case, it's also on the tool. CMake is a bad tool. Unfortunately it's the least bad one available for that task. I've rambled on this topic before. 
My most recent adventure with CMake was with its SWIG support. This was as painful and error-prone as everything else in CMake, and I've used CMake for years.
I completely disagree. Cmake is an exemplary tool which achieved the status of de-facto standard due to its design and simplicity.
Cue the popular saying about how bad workers always blame their tools. If a worker does awful things, that's a reflection of how bad the worker is, not the tools.
Case in point: complaining about cmake because of its rudimentary BASIC-like scripting language is a huge red flag. If your idea of using cmake is to write scripts, you're either aiming to do something extremely exotic and exceptional, like adding macros that magically support frameworks like Qt or conan, or I dare say you do not know what you are doing. At all. I repeat: you only need to call about half a dozen cmake functions to declare build targets and define their properties, and you've already met over 95% of any project's needs. If instead you're in the business of mangling makefiles with BASIC you should really take a step back and reconsider what you're trying to do, because odds are you're doing it very wrong.
CMake is miserable to work with, but has excellent cross-platform support, able to target many build systems and IDEs. It also has a wealth of package-detection libraries. It was first-to-market with this combination of virtues, and it's difficult for competing systems to beat CMake on those terms. These are much of the reason I continue to use CMake, the other main reason being that fragmentation is harmful in itself.
> Cue the popular saying about how bad workers always blame their tools. If a worker does awful things, that's a reflection of how bad the worker is, not the tools.
You seem to be suggesting there can be no such thing as a poor tool.
When someone expresses dissatisfaction with a tool, and lays out numerous specific complaints born of their experience, that does not entitle you to imply they must just be incompetent, which is what you've done. You might refrain from borderline personal insults in future.
You've not substantiated the "if a worker does awful things" premise. I've written several well-functioning CMake scripts, always following the current best practices. My point isn't that it's impossible to do this, but that it's painful to achieve.
> complaining about cmake because of its rudimentary BASIC-like scripting language is a huge red flag
Writing scripts is the only way you can use CMake.
> If your idea of using cmake is to write scripts, you're either aiming to do something extremely exotic and exceptional, like adding macros that magically support frameworks like Qt or conan, or I dare say you do not know what you are doing.
Again, CMake is always script-driven.
Anyway, this response is misguided. The CMake scripting language is not intended only for small and simple scripts. I already addressed this point last time.  CMake package-detection scripts are enormous, and fail to follow any kind of standard pattern. Just look at this official first-party CMake script and tell me this is how it ought to look.  It's a trainwreck.
> you only need to call about half a dozen cmake functions to declare build targets and define their properties, and you've already met over 95% of any project's needs.
If you make the tiniest error anywhere along the line, you will get mysterious failures rather than helpful error messages. You might even get failures which only affect certain platforms. I've encountered this several times, as CMake fails to properly abstract away all the differences between the Windows and Unix build+link models, especially regarding shared libraries. Once you've got your script working, it will look as if CMake properly abstracts away these differences, but the existence of such failure modes belies the illusion.
A good metabuild system would aspire to make invalid states unrepresentable, or would at least help defend against common mistakes. CMake doesn't even try. It's fragile footguns all the way down.
A recent example of a failure mode that simply wouldn't happen in a better system (I mentioned another previously at ): I'm seeing CTest passing the string WORKING_DIRECTORY to my test script, as a command-line argument. Presumably it's incorrectly handling the WORKING_DIRECTORY marker in the argument list of add_test. Am I doing something wrong? Perhaps, but not that I can tell. Presumably this failure mode is only possible because of CMake's awful approach to parameter-handling, and would not have occurred in a well-designed system.
> If instead you're in the business of mangling makefiles with BASIC you should really take a step back and reconsider what you're trying to do because odds are you're doing it very wrong.
You're blindly assuming I'm misusing CMake. When I write CMake scripts, I always try to follow the current best-practices. Again, I do not appreciate you assuming that I have no idea what I'm doing simply because I disagree with you.
XML. Ugh, the XML.
The backing company was a bit too eager to sell artifact repository software.
The lingo was pointlessly obscure (coordinates? artifacts? "project object model"?)
It didn't offer a couple of key options (using local jars without an artifact repository), so it was too inflexible.
Gradle did fly off the handle too quickly, breaks wayyy too much stuff with each release, has way too many releases and examples that don't work anymore... but on the other hand you can usually cobble together what you want from a couple sample builds and some stackoverflow.
What the JVM community probably needed was a YAML-based Maven with a bit more flexibility than maven provided.
Of course under the hood it's just a very complicated 'make', like every build tool is.
This likely just means unnecessary complexity for a simple Java project where the development just happens in IDEA (or Eclipse or even NetBeans) and the IDE builds and checks everything, and ... at release time someone just builds a package and it's done.
But in case of a project with a lot of services, dependencies, there's arguably some advantage to setting up Bazel to help with development. (Though the build/bzl files all look ugly and unintuitive, a bit like CMake :[ )
E.g., a simple project with no outside deps might be like (and I typed this out on my phone):
java_binary(name = "grep_bin", srcs = ["grep_main.java"], deps = [":grep_lib"])
java_library(name = "grep_lib", srcs = ["grep_lib.java"])
Things get more opinionated when you pull in outside libraries, but even then bazel makes it fairly painless and integrates with other build systems where needed.
Take a look at the docs here for more info: https://docs.bazel.build/versions/master/be/java.html
It was built to solve the problem of building software at scale, across multiple languages and platforms, in a single monorepo. And it's great at that. It becomes limiting as your use case steps away from this "monorepo, across languages" approach.
For example the Scala rules have specs2 (a unit testing lib/harness/framework/thing) 
I have no idea how this all works in practice.
XML - if for whatever reason you decide to make a good product unpopular, just add XML
It's a tool that's perfect for Google.
I think if you're _not_ using JS (and to a lesser extent, Python), it feels like a no-brainer. Unfortunately it's super oriented around getting build artifacts, which doesn't work so well with Python/JS so you're going to have to fight stuff a bit (though the Bazel Slack channel is filled with pretty helpful people).
Overall I've gotten way madder at the other tooling for being "wrong" more than Bazel itself (lots of people really do random stuff when faced with symlinks for example).
I don't know what the "right" amount of time is, but if you currently have 30min+ CI times, this can cut stuff down way more and get you much closer to a nice CI/CD flow.
The thing that kinda kills me is that it feels purposefully obtuse at times (in particular about obfuscating the custom rules stuff, despite it being _the_ way to wrap existing projects nicely). But all the Xooglers who end up writing Bazel clones somehow end up with even more obtuse tools...
* Performs incremental builds an order of magnitude faster than Maven; a zero-change build finishes immediately.
* Can have global caches for all builds across the organization, so that a workstation build can automatically build incrementally upon a standard build; this massively reduces build times for a big monorepo.
* Very fast, incremental test execution: test results are cached, so only tests affected by the changed code are run. Test execution is parallel by default.
Some other things:
* Pull in dependencies directly from a git repository, without having to publish them first as in Maven; this can be a plus or a minus.
* Cross-language builds: you can have each of your modules programmed in a different language, declare dependencies between them, and use Bazel to build incrementally, quickly and reliably.
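The git-dependency case looks roughly like this in a WORKSPACE file (the repository name, URL, and commit here are made up):

```python
# WORKSPACE - fetch a dependency straight from git, no publish step needed
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

git_repository(
    name = "some_lib",                                   # hypothetical name
    remote = "https://github.com/example/some-lib.git",  # hypothetical repo
    commit = "0123456789abcdef0123456789abcdef01234567",
)
```

Pinning to a commit (rather than a branch) is what keeps the build reproducible and cache-friendly.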
That said, maintaining a Bazel build is also a magnitude more complex than a Maven one, especially if you're not using one of Google's main languages (Java/C++). Plugins for other languages are of varying quality.
I see the benefits of that in projects that take hours to build, but in every one of my (non-hobby) Maven projects I always do a clean before package/verify just to make sure the project builds.
mvn clean verify
And I'm certain it will work in CI also.
However, Bazel cache is a lot more robust, so you almost never do clean build. The only time I had to perform a clean build was when I accidentally upgraded my C++ compiler and broke the cache.
How does it enforce that? Does it sandbox the compiler or something? Java compiler annotation processors can perform arbitrary IO.
There's also (I believe experimental) support for using Docker as a sandbox backend. That ends up being useful if you're using the remote build execution support: you can run a build locally in exactly the same environment it will run on a build farm.
(I once accidentally baked a time and random number into a binary myself - and the random number thing would have been a serious security bug if we had not found it with test coverage. Would have been better if the build system disallowed it.)
They already have support for build times and labels, but you can add arbitrary stuff like git hashes too.
You can of course define these as variables, but that'll just drop all your caches on each build which eliminates the main selling point of Bazel.
Bazel can use a sandbox as an additional layer to prevent you from accessing undeclared inputs, but the sandbox is optional and can be turned off.
The Java ecosystem has much of that available also though - even a large Java-only project is somewhat tough to justify moving away from the tools almost every developer will arrive already knowing.
This is how I feel about most of Google's Java libs (Guice, Guava, Gson). They used to be better, but now they're just tech debt.
Guice -> If you need to support the jakarta.* namespace the only real answer is Weld or another CDI compliant implementation. Otherwise guice is probably still fine.
GSON -> Jackson as always.
JetBrains and it seems basically every library has support for Gradle and sometimes Maven, but not Bazel. Gradle is verbose and has a really steep learning curve, but it's battle-tested and does its job really well. I don't really see why anyone would use Bazel instead.
Gradle: Just slap any kind of code into our build files and we'll run it. Everything is code, everything is customizable and you can muck around with everything.
Bazel: Here are a few build rules. All compilation needs to be deterministic; all tasks need clear deterministic inputs and generate clear deterministic outputs. It's very rigid, and it uses that rigidity to get significant performance wins, especially when using a build farm.
A gradle build can be very complicated and very long.
Bazel is relatively easy to learn and relatively concise (again that's relative to Gradle which for a Java dev basically requires you to learn 1-2 new languages if you want to understand the build file syntax), and IMO is MUCH better if you are working with multiple languages.
However I haven't tried to use Bazel with things like native libs in the JVM... so it's possible that my Gradle opinion is biased because I used it with more complicated use cases.
One thing that Bazel has that I wish gradle did was remote compilation and remote test execution rather than just remote caching of artifacts.
* SNAPSHOT Maven artifacts are not hermetic and not reliable for caching: Maven only re-fetches SNAPSHOTs once a day by default, so you will miss changes if you do not explicitly update. In Bazel, any change in a BUILD target's inputs is detected immediately and will trigger a rebuild.
For builds there are a few options to check out: https://github.com/jin/awesome-bazel#remote-caching-and-exec...
So how does this work with maven? Does it just make one huge bazel module that gets recompiled every time? I'd be interested in some more details in the readme.