Exodus: Easily migrate your JVM code from Maven to Bazel (github.com/wix)
70 points by zdw 4 days ago | 75 comments





I know nothing about Bazel, but I've never understood the JVM community's obsession with getting off Maven.

My experience of Gradle is that Gradle projects require a PhD in Gradle AND intimate familiarity with the tastes of whoever wrote that particular project's .gradle files. Is Bazel like that too?

When we inherit codebases from other teams, often one of the first things we do is convert it from Gradle to Maven.

I honestly couldn't care less about how flexible and powerful Gradle or Bazel or some other build tool is compared to Maven. Maven is simple, the same in every project (this is huuuuuge), just works and gets out of the way.


Maven is one of the most stable and "good enough" tools there is, which is why a significant share of software engineers hate it.

If you enjoy using Gradle for its customizability and flexibility and power, you are doing it wrong. You have created an overengineered monster I would dread to work with. I don't want to learn Groovy, I want to develop applications.


> which is why a significant part of software engineers hate it.

i used to hate maven - i was a big fan of Ant. But as time went on, i saw how useful maven was. Being _exactly_ the same in every project, requiring the same project file structure, requiring the same build steps, and the same build commands, etc.

People who own projects seem to believe their project's build is special and requires custom stuff to make it work. That's rarely actually the case. Just shoehorn your project into Maven.


> […] the same in every project (this is huuuuuge) […]

Hard to overstate how useful that is. Configuring something in a Maven build may be verbose or sometimes hard, but most of the time the build just does its thing, and all developers know at least `mvn clean install`.


One of the main goals of the project was “convention over configuration” and it succeeded well at that.

> My experience of Gradle is that Gradle projects require a PhD in Gradle AND intimate familiarity with the tastes of whoever wrote that particular project's .gradle files. Is Bazel like that too?

This. I was quite happy to try out Gradle when it appeared, but when I read a few tutorials it came down to: this is just Ant all over again, except this time instead of XML I get Groovy.

Maven does one thing great: It forces you to use their build steps and every project has exactly the same steps.

Gradle is like Ant or CMake.


Exactly. You could have a million line pom.xml and you'd still most likely just need `mvn clean package` to get a usable JAR. XML is a nasty format but it just works...

Maven just hides the complexity in its plugins which are implemented in Java.

If you don't need any plugin - then, yes, it's simple.

But once you need specialized plugins it's no better than Gradle/Bazel. The latter two let you add some logic in the build scripts, so at least it's visible to users. If you need special functionality in Maven you have to write a plugin or gather a few plugins. And when it breaks down (as all tools do), it's difficult to find the code and understand the problem.


Writing Maven plugins is quite easy and doesn't require you to learn another language (unlike Gradle, where you have to know Groovy).

At least with Groovy, most Java programs are also valid Groovy programs (the converse is not true, but the languages aren't too different).

Yes, but with Groovy you get an order of magnitude slower compilation times and less effective autocomplete.

And a language that has sometimes strange design choices.

Groovy is like PHP; if I needed to upgrade from Java it would be to Scala. At least then I'd get a superior language in exchange for the similarly slower builds.


Gradle has a Kotlin Script build script option these days too. I haven't tried it yet, so I'm not sure if it's any better.

> Maven is simple, the same in every project (this is huuuuuge), just works and gets out the way.

Recently tried to find the correct and canonical way to set up a new, small spring boot project for a customer. No one could tell me how: "we just use spring initializr and copy the relevant files" From where? Uh, oh. So I tried to find a canonical solution. Did some research. At least 14 variants..! No clear winner.

Nobody really understands it, it seems, but at least they don't have to use the dreaded Maven.

(For those who don't know, Maven's main disadvantage is that it is written in XML. Since Maven files frequently get over 30 lines long - especially if people don't know what they are doing - and (parent projects) need to be updated twice a month on average because of updates to dependencies, this is a living nightmare to some people ;-)

Might contain traces of both sarcasm, irony, exaggerations and other literary tools. Those who get it, feel free to laugh. Those who don't: After ~15 years in the field I've not seen anything working as well as Maven on the kind of projects and teams I work on and I also don't think I've seen any other project getting as much undeserved flak.


> Maven's main disadvantage is that it is written in XML

I would say that this is Maven's most obvious disadvantage. Its main disadvantage is its inflexibility.

I'm working on a little library right now that is sort of a facade for two possible backend libraries. Some code is shared, some depends on backend A, some on backend B. I want to separate the three kinds of source code, compile and test each against the correct dependencies (none, A, or B), and then put all the class files in one jar file.

In Gradle I would define some extra configurations for the A and B sides, then customise the standard jar task to include their output.
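A rough sketch of what that might look like in the Groovy DSL (source set names and dependency coordinates are hypothetical, and this assumes the java-library plugin):

```groovy
plugins { id 'java-library' }

// Shared code lives in the main source set; each backend gets its own
// source set compiled against main's output plus its own dependencies.
sourceSets {
    backendA { compileClasspath += sourceSets.main.output }
    backendB { compileClasspath += sourceSets.main.output }
}

dependencies {
    backendAImplementation 'com.example:backend-a:1.0'
    backendBImplementation 'com.example:backend-b:1.0'
}

// Fold all three outputs into the single published jar.
jar {
    from sourceSets.backendA.output
    from sourceSets.backendB.output
}
```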

What would i do in Maven?

My Maven knowledge is pretty out of date, but i think the Maven answer would be "don't do that, create three modules and build and publish three artifacts". Or, to put it another way "our tool can't do that, so you should make life worse for your users, but we're going to insist that our way is actually the proper way to do it, for mysterious quasi-mystical reasons".


> What would i do in Maven?

Typically, customization of Maven goals is done via profiles. You can modify the configuration of the plugins depending on which profile is active. Though I'm obviously not sure if it will solve your problem without knowing the exact details.
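For example (profile id and coordinates are made up), a pom.xml profile that swaps in one backend's dependency, activated with `mvn package -P backend-a`:

```xml
<profiles>
  <profile>
    <id>backend-a</id>
    <dependencies>
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>backend-a</artifactId>
        <version>1.0</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```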


> My experience of Gradle is that Gradle projects require a PhD in Gradle AND intimate familiarity with the tastes of whoever wrote that particular project's .gradle files.

In the C++ world we have exactly the same problem with CMake.


> In the C++ world we have exactly the same problem with CMake.

"Modern CMake" (i.e., the target-based style made possible since the release of CMake v3.0) is terribly easy and straightforward, provided that your devteam understands the need to stay in their lane.

project(), add_executable()/add_library(), target_include_directories(), set_target_properties(), and add_subdirectory(). This gets any MVP working, and any project 95% of the way to where it needs to be.

Add in set() to define helper vars like proj_SOURCES, proj_HEADERS, and proj_INTERFACES, and option() to get fancy cacheable settings.
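Put together, a minimal target-based CMakeLists.txt using just those calls might look like this (project and file names are placeholders):

```cmake
cmake_minimum_required(VERSION 3.0)
project(proj CXX)

# Helper vars for the file lists.
set(proj_SOURCES src/lib.cpp)

add_library(proj_lib ${proj_SOURCES})
target_include_directories(proj_lib PUBLIC include)

add_executable(proj_app src/main.cpp)
target_link_libraries(proj_app PRIVATE proj_lib)

# A cacheable setting.
option(PROJ_BUILD_TESTS "Build the test suite" OFF)
```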

Cmake only becomes a problem if devteams have no idea on how to write simple code, and can't help themselves turning any hello world into a big bowl of spaghetti code. You don't need functions or macros or anything at all, unless you want to waste time feeling smart for writing towers of hanoi in cmake, or shoving square pegs into round holes.

In spite of having about a decade of experience with CMake projects under my belt, the only place I ever defined functions in CMake was to add boilerplate code for registering unit tests with CTest by just passing the source file name, and even that was not worth the trouble.

You can write stuff as convoluted and unmaintainable as you'd like, but that's on you, not the tool.


> You can write stuff as convoluted and unmaintainable as you'd like, but that's on you, not the tool.

In this case, it's also on the tool. CMake is a bad tool. Unfortunately it's the least bad one available for that task. I've rambled on this topic before. [0][1]

My most recent adventure with CMake was with its SWIG support. This was as painful and error-prone as everything else in CMake, and I've used CMake for years.

[0] https://news.ycombinator.com/item?id=24203172

[1] https://news.ycombinator.com/item?id=24565266


> In this case, it's also on the tool. CMake is a bad tool. Unfortunately it's the least bad one available for that task. I've rambled on this topic before. [0][1]

I completely disagree. Cmake is an exemplary tool which achieved the status of de-facto standard due to its design and simplicity.

Cue the popular saying about how bad workers always blame their tools. If a worker does awful things, that's a reflection of how bad the worker is, not the tools.

Case in point: complaining about cmake because of its rudimentary BASIC-like scripting language is a huge red flag. If your idea of using cmake is to write scripts, you're either aiming to do something extremely exotic and exceptional, like adding macros that magically support frameworks like Qt or conan, or I dare say you do not know what you are doing. At all. I repeat: you only need to call about half a dozen cmake functions to declare build targets and define their properties, and you've already met over 95% of any project's needs. If instead you're in the business of mangling makefiles with BASIC you should really take a step back and reconsider what you're trying to do, because odds are you're doing it very wrong.


> achieved the status of de-facto standard due to its design and simplicity

CMake is miserable to work with, but has excellent cross-platform support, able to target many build systems and IDEs. It also has a wealth of package-detection libraries. It was first-to-market with this combination of virtues, and it's difficult for competing systems to beat CMake on those terms. These are much of the reason I continue to use CMake, the other main reason being that fragmentation is harmful in itself.

> Cue the popular saying about how bad workers always blame their tools. If a worker does awful things, that's a reflection of how bad the worker is, not the tools.

You seem to be suggesting there can be no such thing as a poor tool.

When someone expresses dissatisfaction with a tool, and lays out numerous specific complaints born of their experience, that does not entitle you to imply they must just be incompetent, which is what you've done. You might refrain from borderline personal insults in future.

You've not substantiated the "if a worker does awful things" premise. I've written several well-functioning CMake scripts, always following the current best practices. My point isn't that it's impossible to do this, but that it's painful to achieve.

> complaining about cmake because of its rudimentary BASIC-like scripting language is a huge red flag

Writing scripts is the only way you can use CMake.

> If your idea of using cmake is to write scripts, you're either aiming to do something extremely exotic and exceptional, like adding macros that magically support frameworks like Qt or conan, or I dare say you do not know what you are doing.

Again, CMake is always script-driven.

Anyway, this response is misguided. The CMake scripting language is not intended only for small and simple scripts. I already addressed this point last time. [0] CMake package-detection scripts are enormous, and fail to follow any kind of standard pattern. Just look at this official first-party CMake script and tell me this is how it ought to look. [1] It's a trainwreck.

> you only need to call about half a dozen cmake functions to declare build targets and define their properties, and you've already met over 95% of any project's needs.

If you make the tiniest error anywhere along the line, you will get mysterious failures rather than helpful error messages. You might even get failures which only affect certain platforms. I've encountered this several times, as CMake fails to properly abstract away all the differences between the Windows and Unix build+link models, especially regarding shared libraries. Once you've got your script working, it will look as if CMake properly abstracts away these differences, but the existence of such failure modes belies the illusion.

A good metabuild system would aspire to make invalid states unrepresentable, or would at least help defend against common mistakes. CMake doesn't even try. It's fragile footguns all the way down.

A recent example of a failure mode that simply wouldn't happen in a better system (I mentioned another previously at [2]): I'm seeing CTest passing the string WORKING_DIRECTORY to my test script, as a command-line argument. Presumably it's incorrectly handling the WORKING_DIRECTORY marker in the argument list of add_test. Am I doing something wrong? Perhaps, but not that I can tell. Presumably this failure mode is only possible because of CMake's awful approach to parameter-handling, and would not have occurred in a well-designed system.

> If instead you're in the business of mangling makefiles with BASIC you should really take a step back and reconsider what you're trying to do because odds are you're doing it very wrong.

You're blindly assuming I'm misusing CMake. When I write CMake scripts, I always try to follow the current best-practices. Again, I do not appreciate you assuming that I have no idea what I'm doing simply because I disagree with you.

[0] https://news.ycombinator.com/item?id=24567450

[1] https://github.com/Kitware/CMake/blob/master/Modules/FindIma...

[2] https://news.ycombinator.com/item?id=24203172


Maven issues:

XML. Ugh, the XML.

The backing company was a bit too eager to sell artifact repository software.

The lingo was pointlessly obscure (coordinates? artifacts? "project object model"?)

It didn't offer a couple of key options (local jars without an artifact repository), so it was too inflexible.

Gradle did fly off the handle too quickly, breaks wayyy too much stuff with each release, has way too many releases and examples that don't work anymore... but on the other hand you can usually cobble together what you want from a couple sample builds and some stackoverflow.

What the JVM community probably needed was a YAML-based Maven with a bit more flexibility than maven provided.


Also, IDE integration works a lot better. IntelliJ with Maven is a joy; with Gradle it's a dog-slow shit-show.

Gradle has sane multi-project management; Maven is very strange in that regard. Another issue with Maven is its kind-of-hacky configuration for Kotlin builds. For a single-project Java build, Maven is my favourite tool.

Bazel's selling point is that it's designed to be integrated into various pipelines, handling complexity (multiple languages, platforms, targets, etc) AND being fast (cache what you can, so incremental builds are first class things).

Of course under the hood it's just a very complicated 'make', like every build tool is.

This likely just means unnecessary complexity for a simple Java project where the development just happens in IDEA (or Eclipse or even NetBeans) and the IDE builds and checks everything, and ... at release time someone just builds a package and it's done.

But in case of a project with a lot of services, dependencies, there's arguably some advantage to setting up Bazel to help with development. (Though the build/bzl files all look ugly and unintuitive, a bit like CMake :[ )


Does it encourage every team and every dev to create their own custom config that takes two hours to figure out like Gradle, or is it more like Maven where you only specify anything that isn't bog standard?

It's very standardized. I don't have any experience with using it with things outside a Java binary, but for that use case you basically have no choice but a number of java_library rules and a java_binary.

E.g., a simple project with no outside deps might be like (and I typed this out on my phone):

    java_binary(
        name = "grep_bin",
        srcs = ["grep_main.java"],
        deps = [":grep_lib"],
    )

    java_library(
        name = "grep_lib",
        srcs = ["grep_lib.java"],
    )

Things get more opinionated when you pull in outside libraries, but even then bazel makes it fairly painless and integrates with other build systems where needed.

Take a look at the docs here for more info: https://docs.bazel.build/versions/master/be/java.html


No, it's actually the opposite - it has very rigid build rules with rigid inputs and outputs so the toolchain can reason about dependencies and it doesn't really allow adding arbitrary execution in the BUILD files to prevent code messing up caching and remote builds.

It was built to solve the problem of building software at scale, across multiple languages and platforms, in a single monorepo. And it's great at that. It becomes limiting as your use case steps away from this "monorepo, across languages" approach.


It has support for many languages (called rules [0]) and it has support for many tools for that specific language ecosystem.

For example the Scala rules have specs2 (a unit testing lib/harness/framework/thing) [1]

I have no idea how this all works in practice.

[0] https://github.com/bazelbuild/bazel/blob/master/site/docs/ru...

[1] https://github.com/bazelbuild/rules_scala/blob/master/exampl...


Yeah, my first thought "another build manager to worry about" - it's already annoying that some projects use Gradle, which I don't want to have to learn.

“ but I've never understood the JVM community's obsession with getting off Maven”

XML - if for whatever reason you decide to make a good product unpopular, just add XML


Seriously tried adopting Bazel on a team once and never again. It's a huge headache and requires a dedicated engineering resource to be available to assist/debug/teach when inevitably someone wants to do (or does) something outside the structure and expectations of Bazel. Every so often I try again on a personal project and realize I'm spending more time futzing with Bazel than writing code.

It's a tool that's perfect for Google.


An alternate voice here: it took a long time for us to get Bazel (and there's still a lot of headbanging on stuff) but if you want fast CI times, a huge test suite, and not have to deal with like 20 repositories (especially if you want cross-cutting PRs) it's basically the only reasonable tool that will get you there. And yeah, you gotta have someone spend a lot of time with it (maybe comparable to futzing about with some JS webpack spaghetti-code... thing).

I think if you're _not_ using JS (and to a lesser extent, Python), it feels like a no-brainer. Unfortunately it's super oriented around getting build artifacts, which doesn't work so well with Python/JS so you're going to have to fight stuff a bit (though the Bazel Slack channel is filled with pretty helpful people).

Overall I've gotten way madder at the other tooling for being "wrong" more than Bazel itself (lots of people really do random stuff when faced with symlinks for example).

I don't know what the "right" amount of time is, but if you currently have 30min+ CI times, this can cut things down way more and get you much closer to a nice CI/CD flow.

The thing that kinda kills me is that it feels purposefully obtuse at times (in particular about obfuscating the custom rules stuff, despite that being _the_ way to wrap existing projects nicely). But all the Xooglers who end up writing Bazel clones somehow end up with even more obtuse tools...


It's a build framework like Gradle, but with Python instead of Groovy. It's extremely powerful, just like Gradle is, so it requires considerable time to get comfortable with. I never regretted spending the time with Gradle, as my builds have become more flexible and better over time. I don't think I will ever push to Bazel though, as to me it looks like Gradle with a Python DSL.

To ask the obvious unanswered question: Why would someone want to move to Bazel from Maven? What advantages does it bring?

For me, the main advantage of Bazel is speed. When properly structured and configured, a Bazel build can:

* Perform incremental builds an order of magnitude faster than Maven; a zero-change build finishes immediately.

* Have global caches for all builds across the organization, so that a workstation build can automatically build incrementally on top of a standard build; this massively reduces build times for a big monorepo.

* Run tests very fast and incrementally: test results are cached, so only tests affected by the changed code are run. Test executions are parallel by default.

Some other things:

* Pull in dependencies directly from a git repository, without having to publish them first as in Maven; this can be a plus or a minus.

* Do cross-language builds: you can have each of your modules written in a different language, declare dependencies between them, and use Bazel to build them incrementally, quickly, and reliably.

That said, maintaining a Bazel build system is also an order of magnitude more complex than maintaining a Maven one, especially if you're not using one of Google's main languages (Java/C++). Rules for other languages are of varying quality.


...and a lot of those arguments seem geared toward big codebases. Small micro-services, such as Spring Boot, with the bulk of the test execution going to @SpringBootTest tests, where parallelization has its drawbacks, may not profit from most of those "benefits".

Fun fact, it seems like Maven was also meant to service a variety of languages. There's even some support for polyglot POM files. Aside from the niche C++ or JavaScript ("frontend") plugin, that never did much.


Actually we have seen a ton of success using it for Spring Boot microservices - each microservice has its own BUILD file within the monorepo. Test execution in Spring can be parallelized as needed (and not used if not needed!), but I find our tests run a lot faster under Bazel due to the caching anyway.

Can we agree that "it depends"? If I have a resource-starved build runner (looking at you, there, Bitbucket Pipelines), and I have a good handful of parallel integration tests, each spinning up a few Docker containers worth of dependencies, I will end up troubleshooting out of memory situations. I also don't think you get to leverage application context caching, for parallel test execution, and if you did, I'd be concerned about unpredictable conflicting state between those integration tests. If I can already run my tests for a single microservice in about two minutes, the parallelization just isn't worth it to me (in CI). And, oh, I'm not in a monorepo.

> Perform incremental build a magnitude faster than Maven, zero changes build finishes immediately.

I see the benefit of that in projects that take hours to build, but in every one of my Maven projects (non-hobby) I always do clean before package/verify just to make sure the project builds.

    mvn clean verify

And I'm certain it will work in CI also.


People tend to run `mvn clean verify` because Maven caching is unreliable and can lead to build failures.

However, Bazel's cache is a lot more robust, so you almost never do a clean build. The only time I had to perform a clean build was when I accidentally upgraded my C++ compiler and broke the cache.


Given that Google has a Common Lisp codebase, I wonder...do they also have Bazel plugins for Common Lisp?

Yes

Is test result caching only possible due to a fundamental design of Bazel, or could you modify for example the maven surefire plugin to achieve the same?

Bazel guarantees that each build step depends only on its declared inputs, and that each step is deterministic. So if the inputs to a step have not changed, Bazel can skip that step and use a cached copy of the result of that step. If you break up a large multi module build into a tree of small steps, Bazel can perform an incremental build after a small change in a few minutes, even though running the entire build from scratch would take hours.
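As a sketch (target and file names invented): with a BUILD file like the one below, editing Util.java invalidates only :util and the targets downstream of it; everything else is served from the cache.

```python
java_library(
    name = "util",
    srcs = ["Util.java"],
)

java_library(
    name = "app",
    srcs = ["App.java"],
    deps = [":util"],
)

java_test(
    name = "app_test",
    srcs = ["AppTest.java"],
    test_class = "AppTest",
    deps = [":app"],
)
```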

> Bazel guarantees that each build step depends only on its declared inputs

How does it enforce that? Does it sandbox the compiler or something? Java compiler annotation processors can perform arbitrary IO.


Exactly that: https://docs.bazel.build/versions/master/sandboxing.html

There's also (I believe experimental) support for using Docker as a sandbox backend. That ends up being useful if you're using the remote build execution support: you can run a build locally in exactly the same environment it will run on a build farm.


I wonder if this stops you reading the clock and random number sources during build?

(I once accidentally baked a time and random number into a binary myself - and the random number thing would have been a serious security bug if we had not found it with test coverage. Would have been better if the build system disallowed it.)


You can actually input system stuff like this via: https://docs.bazel.build/versions/master/user-manual.html#fl...

They already have support for build times and labels, but you can add arbitrary stuff like git hashes too.
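For instance, a stamped build can be wired up with the --stamp and --workspace_status_command flags (both are standard Bazel options; the script path here is hypothetical):

```
# .bazelrc
build:release --stamp
build:release --workspace_status_command=tools/status.sh

# tools/status.sh prints key/value pairs that stamped rules can embed, e.g.
#   echo "STABLE_GIT_SHA $(git rev-parse HEAD)"
```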


It can't stop you doing dumb things in code (e.g. use #pragmas or variables in C++ code) which will probably make your build behave funny, because it'll cache outputs not knowing that they're supposed to change.

You can of course define these as variables, but that'll just drop all your caches on each build which eliminates the main selling point of Bazel.


I don’t see Windows in supported systems for sandboxfs. I guess that would make bazel not suitable for most developers.

You have to declare all inputs explicitly, including your compiler, and Bazel will monitor them: any changes in input files, compilers, or even some environment variables like $PATH will trigger a rebuild.

Bazel can use sandbox as an additional layer to prevent you from accessing undeclared inputs, but the sandbox is optional and can be turned off.


Maven does exactly that as well. The term is "artifacts" in Maven.

Most plugins do not though, and will happily do stuff like run static analysis or lint rules over code that hasn’t changed from the last build 30 seconds ago.

To me, the strongest motivation for Bazel arrives with a complex (slow) build spanning multiple technology ecosystems. Bazel does a great job building all of them, without privileging one over all the others. It has all the key features you'd expect, dependency graph control, caching, distributed computation, etc. You can add support for whatever local complexities invariably arise with custom rules.

The Java ecosystem has much of that available also though - even a large Java-only project is somewhat tough to justify moving away from the tools almost every developer will arrive already knowing.


It came from Googlers, who are kind of notorious for not using the rest of the industry’s tools. Sometimes it’s because they pioneered the problem space, and their own tools really are that much better. Sometimes their tools used to be better but are now just tech debt. Sometimes it’s promo-driven development.

> Sometimes their tools used to be better but are now just tech debt

This is how I feel about most of Google's Java libs (Guice, Guava, Gson). They used to be better, but now they're just tech debt.


What are the best-in-class replacements for those?

Guava -> The JDK Std library, unless you need more performant collections provided by it.

Guice -> If you need to support the jakarta.* namespace the only real answer is Weld or another CDI compliant implementation. Otherwise guice is probably still fine.

GSON -> Jackson as always.
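To illustrate the Guava point: the immutable collection factories Guava was long used for have had JDK equivalents since Java 9 (a standalone sketch, not tied to any project in the thread):

```java
import java.util.List;
import java.util.Map;

public class GuavaToJdk {
    public static void main(String[] args) {
        // Guava's ImmutableList.of(...) -> JDK List.of(...) (Java 9+)
        List<String> names = List.of("a", "b", "c");
        // Guava's ImmutableMap.of(...) -> JDK Map.of(...)
        Map<String, Integer> ages = Map.of("alice", 30, "bob", 25);
        System.out.println(names.size() + " " + ages.get("alice"));
    }
}
```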


Also, how does Bazel compare to Gradle?

JetBrains and it seems basically every library has support for Gradle and sometimes Maven, but not Bazel. Gradle is verbose and has a really steep learning curve, but it's battle-tested and does its job really well. I don't really see why anyone would use Bazel instead.


I think they're fundamentally opposed in their philosophy.

Gradle: Just slap any kind of code into our build files and we'll run it. Everything is code, everything is customizable and you can muck around with everything.

Bazel: Here are a few build rules. All compilation needs to be deterministic; all tasks take clearly declared inputs and generate clear, deterministic outputs. It's very rigid, and it uses that to get significant performance wins, especially when using a build farm.


For the most part, I think you've got it.

A gradle build can be very complicated and very long.

Bazel is relatively easy to learn and relatively concise (again that's relative to Gradle which for a Java dev basically requires you to learn 1-2 new languages if you want to understand the build file syntax), and IMO is MUCH better if you are working with multiple languages.

However I haven't tried to use Bazel with things like native libs in the JVM... so it's possible that my Gradle opinion is biased because I used it with more complicated use cases.


The Bazel plugin for IntelliJ works rather well - I’ve used it for Python and Go quite extensively and it “just works”. I assume this is the case for Java too (though have not verified). The main issue is that releases often lag IntelliJ releases by a not-insignificant amount since they’re (I think) done by Google and not JetBrains.

Correct. That plugin is developed by Google primarily as internal dev tooling, and the version of IntelliJ supported internally is generally not bleeding edge.

Gradle's remote build cache, which caches at the task level, is nice and granular. Also, we've never had a problem with a change requiring a rebuild not being detected.

One thing that Bazel has that I wish gradle did was remote compilation and remote test execution rather than just remote caching of artifacts.


You get hermetic and cacheable builds, but only if you’re willing to abandon the whole Maven plugin ecosystem and handle a lot of codegen yourself.

Bazel does embrace code gen, but to be honest I've found rules for most of the code gen I've needed to do and they plug in nicely.

I still don't understand, Maven artifacts are hermetic and cacheable, no?

* Bazel offers much more granular caching, because a Bazel BUILD target is often much smaller than a Maven artifact. For example, a Java BUILD target is often just a single package.

* SNAPSHOT Maven artifacts are not hermetic and not reliable for caching: Maven only re-fetches SNAPSHOTs once a day by default, so you will miss changes if you do not explicitly update them (e.g. with `mvn -U`). In Bazel, any change in a BUILD target's inputs is detected immediately and will trigger a rebuild.


Bazel rules are supposed to only read specified inputs and write specified deterministic outputs, and they’re skipped if nothing changed. Maven plugins can do whatever they want, and you decide whether to publish what's on disk.

Distributed builds, and even distributed cache of build outputs. At google scale a distributed cache means you don't even need to build anything except the smallest code unit that changed--your CI systems, etc. have already built and warmed the cache for almost any other thing you want to build.

I thought that the distributed part wasn't open-sourced.

For the cache: https://docs.bazel.build/versions/master/remote-caching.html and https://github.com/buchgr/bazel-remote You can actually just set it up to use a shared S3 bucket to hold the cache.

For builds there are a few options to check out: https://github.com/jin/awesome-bazel#remote-caching-and-exec...
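Minimal .bazelrc wiring for a shared cache (the endpoint is a placeholder; both flags are standard Bazel options):

```
# .bazelrc
build --remote_cache=http://cache.internal.example:9090
build --remote_upload_local_results=true
```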


That all looks like a massive security vulnerability. Anyone with write access to the cache can upload a malicious build artifact and end up getting it compiled into everyone else's binaries, effectively giving them access to act as any developer or the ability to get their code sneaked into a release undetected. The page doesn't describe that risk at all!

I read what you meant, not what you said; I had the same question.

With Maven you can have circular dependencies within a module, whereas Bazel IIRC gets its speed from having a graph of small modules where only modules that change get rebuilt, but which places strict requirements on circle-free code.

So how does this work with Maven? Does it just make one huge Bazel module that gets recompiled every time? I'd be interested in some more details in the readme.

Also FWIW bazel is the only official way to use j2cl to compile java to javascript.



