The problem in enterprise tech, as you found, is that you are forced to use particular, unproductive old frameworks filled with legacy, over-engineered bloat.
Modern Java micro-services, in contrast, are really fast to develop. Greenfield Java projects, where you can make your own choices of lean technologies and libraries, are simply amazing. They also have terrific characteristics under load: the JVM performs well even with bloatware, and when you trim your JARs down to be lean and mean, it's utterly amazing.
This attitude is a bit surprising to me. The main point of micro-services is that they address a complexity problem in large organizations: they allow a large organization to break into small teams that can work (relatively) independently, so each team can iterate faster. However, microservices definitely make a host of issues harder:
* cross-service transactions are much harder
* cross-cutting concerns can be more difficult to change
* organizational complexity can increase: a feature you want to add may need corresponding changes in upstream or downstream services
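On the transactions point: without a shared database transaction, the common workaround is a saga, where each completed step has a compensating action that runs in reverse order when a later step fails. A minimal sketch, with made-up step names and a hypothetical `Step` shape just for illustration:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// A saga runs each step in order; if one fails, it runs the
// compensations of the already-completed steps in reverse.
class Saga {
    record Step(String name, Runnable action, Runnable compensation) {}

    static List<String> log = new ArrayList<>();

    static void run(List<Step> steps) {
        Deque<Step> done = new ArrayDeque<>();
        for (Step s : steps) {
            try {
                s.action().run();
                done.push(s);
            } catch (RuntimeException e) {
                // Roll back completed steps in reverse order.
                while (!done.isEmpty()) done.pop().compensation().run();
                return;
            }
        }
    }

    public static void main(String[] args) {
        run(List.of(
            new Step("reserve-stock", () -> log.add("reserve"), () -> log.add("release")),
            new Step("charge-card",
                     () -> { throw new RuntimeException("declined"); },
                     () -> log.add("refund"))));
        // The failed charge triggers the stock reservation's compensation.
        System.out.println(log); // [reserve, release]
    }
}
```

Note what this costs you compared to a database transaction: compensations are only best-effort "undo", and other services can observe the intermediate state before the rollback completes.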
Even Martin Fowler has this quote:
> Don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.
Micro-services allowed us to break out of this. They also gave us faster turnaround on feature delivery and better reliability, and they make it easier to isolate problems.
Generally, parts of the monolith that are already abstracted behind large service facades are good candidates for a separate service with an independent data model. Avoid cross-service transactions completely. If you have a cross-cutting concern, that generally means you need a separate service managing it.
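A sketch of that facade heuristic, with a hypothetical `BillingFacade` standing in for the kind of interface the monolith might already have: callers depend only on the interface, so the in-process implementation can be swapped for one that calls the extracted service without the callers changing.

```java
// Inside the monolith, callers already depend only on this facade.
interface BillingFacade {
    String charge(String customerId, long amountCents);
}

// Original in-process implementation.
class LocalBilling implements BillingFacade {
    public String charge(String customerId, long amountCents) {
        return "local:" + customerId + ":" + amountCents;
    }
}

// Extracted implementation: same interface, but the body would make
// an HTTP call to the new billing microservice (stubbed out here).
class RemoteBilling implements BillingFacade {
    public String charge(String customerId, long amountCents) {
        // e.g. POST to the billing service's /charges endpoint
        return "remote:" + customerId + ":" + amountCents;
    }
}

class Checkout {
    private final BillingFacade billing;
    Checkout(BillingFacade billing) { this.billing = billing; }

    String placeOrder(String customerId) {
        return billing.charge(customerId, 999);
    }

    public static void main(String[] args) {
        // Swapping implementations does not touch the caller.
        System.out.println(new Checkout(new LocalBilling()).placeOrder("c1"));
        System.out.println(new Checkout(new RemoteBilling()).placeOrder("c1"));
    }
}
```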
Organisational complexity is definitely increased, but this can be mitigated by tooling. Our build pipeline shows the full dependency graph: what has been built, what is being built, what has been deployed, etc.
We have the concept of a "system", which is basically a versioned set of micro-services running off a build trigger. We developed the capability to namespace systems, i.e. each system of services uses separate resources (Kafka, DB, etc.) and separate URLs (via custom domains) when deployed on our cloud platform. You can also plug your micro-service into a targeted system for diagnostics. This way dev, testing, and product demo teams can work independently. The latter is not micro-service best practice, but in a large, slow-moving organisation we found it valuable.
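One way to implement that kind of namespacing is to derive every resource name and URL from the system's name. The scheme below is a hypothetical sketch (including the `example.internal` domain), not the actual tooling described above:

```java
// Derive per-system ("namespaced") resource names so that dev, test,
// and demo deployments of the same services never share state.
class SystemNamespace {
    final String system; // e.g. "dev", "qa-2", "demo"

    SystemNamespace(String system) { this.system = system; }

    // Kafka topics, DB schemas, and URLs are all prefixed/scoped
    // by the system name, so two systems cannot collide.
    String kafkaTopic(String topic) { return system + "." + topic; }
    String dbSchema(String service) { return system + "_" + service; }
    String baseUrl(String service)  { return "https://" + service + "." + system + ".example.internal"; }

    public static void main(String[] args) {
        SystemNamespace qa = new SystemNamespace("qa-2");
        System.out.println(qa.kafkaTopic("orders")); // qa-2.orders
        System.out.println(qa.baseUrl("billing"));   // https://billing.qa-2.example.internal
    }
}
```

The useful property is that "plugging into" a target system is just constructing one of these with a different system name; the service code itself never hard-codes a topic or URL.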
We also have a distributed dev team, but decided against using microservices. Instead, we have a monolith with a plugin architecture, so remote teams can just add independent modules to add functionality. Occasionally changes are made to the monolithic application itself, and these changes are heavily scrutinized. The plugin architecture provides many of the benefits of microservices, while also allowing for more flexibility on occasion, and eliminates the flaky network calls that are inherent in microservices.
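A minimal sketch of such a plugin architecture, with hypothetical names: each team's module implements a shared interface, and the core only iterates over whatever is registered, so adding functionality doesn't require touching the monolith's core, and every call stays in-process.

```java
import java.util.ArrayList;
import java.util.List;

// Each remote team ships a module implementing this interface.
interface Plugin {
    String name();
    String handle(String request);
}

class PluginHost {
    private final List<Plugin> plugins = new ArrayList<>();

    void register(Plugin p) { plugins.add(p); }

    // Dispatch a request to every plugin, in-process: no network hop.
    List<String> dispatch(String request) {
        List<String> out = new ArrayList<>();
        for (Plugin p : plugins) out.add(p.name() + "->" + p.handle(request));
        return out;
    }

    public static void main(String[] args) {
        PluginHost host = new PluginHost();
        host.register(new Plugin() {
            public String name() { return "audit"; }
            public String handle(String r) { return "logged " + r; }
        });
        System.out.println(host.dispatch("order#1")); // [audit->logged order#1]
    }
}
```

In a real Java monolith the explicit `register` calls would typically be replaced by `ServiceLoader.load(Plugin.class)`, which discovers implementations that each module declares in its `META-INF/services` file, so the core never names the modules at all.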
I agree with all the issues you've stated, but I'd like to add one more. Microservices arguably make it much easier to build systems with circular dependencies, leading to weird race conditions and deadlocks.
Consider two units of code A and B. If implemented as classes, modules, or libraries, it's relatively easy to spot and prevent A calling into B which in turn calls back to A. Sometimes the compiler and tools can automatically catch that.
With microservices, catching dangerous dependencies like this is much more difficult: each service outwardly seems independent of the others, and there are few tools to catch these dependencies.
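One mitigation is to record each service's outbound calls as edges and fail the build when the resulting graph has a cycle. A minimal depth-first cycle check, using the A/B pair above as the cyclic example (the tooling to collect the edges is the hard part and is assumed here):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Detect cycles in a service-call graph with a "gray/black" DFS:
// a node revisited while still on the current path means a cycle.
class CycleCheck {
    static boolean hasCycle(Map<String, List<String>> calls) {
        Set<String> visiting = new HashSet<>(), done = new HashSet<>();
        for (String node : calls.keySet())
            if (dfs(node, calls, visiting, done)) return true;
        return false;
    }

    static boolean dfs(String n, Map<String, List<String>> calls,
                       Set<String> visiting, Set<String> done) {
        if (done.contains(n)) return false;
        if (!visiting.add(n)) return true; // back edge: cycle found
        for (String m : calls.getOrDefault(n, List.of()))
            if (dfs(m, calls, visiting, done)) return true;
        visiting.remove(n);
        done.add(n);
        return false;
    }

    public static void main(String[] args) {
        // A calls B and B calls back into A: the dangerous case.
        System.out.println(hasCycle(Map.of("A", List.of("B"), "B", List.of("A")))); // true
        // A calls B, B calls nothing: fine.
        System.out.println(hasCycle(Map.of("A", List.of("B"), "B", List.of())));    // false
    }
}
```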
We had a pipeline consisting of microservices and Kafka topics. Simple if/then logic quickly became problematic, so I implemented our flow control as a directed acyclic graph, and it helped tremendously.
It's also easy to render your graph with any number of visualization tools to quickly understand/validate workflows.
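For example, emitting the graph in Graphviz DOT format makes it viewable in Graphviz or most online graph viewers. A small sketch with hypothetical pipeline stage names:

```java
import java.util.List;
import java.util.Map;

// Render a pipeline's stage graph as Graphviz DOT, which standard
// visualization tools can draw directly (e.g. `dot -Tpng`).
class DotExport {
    static String toDot(Map<String, List<String>> edges) {
        StringBuilder sb = new StringBuilder("digraph pipeline {\n");
        edges.forEach((from, tos) -> {
            for (String to : tos)
                sb.append("  \"").append(from).append("\" -> \"")
                  .append(to).append("\";\n");
        });
        return sb.append("}\n").toString();
    }

    public static void main(String[] args) {
        System.out.print(toDot(Map.of(
            "ingest",   List.of("validate"),
            "validate", List.of("enrich"))));
    }
}
```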
I don't see how a graph data structure solves this problem. Suppose you've created a photo sharing app. One microservice, A, has the graph database and stores the photos. One service, B, uploads and downloads photos. And a third service, C, applies filters to photos.
It's pretty easy to architect these services such that the download service B uses the filter service C in some situations, and the filter service C uses the download service B in others. This is obviously a bad design, but with microservices it's easier to make these bad design choices because the folks who wrote one service have little information about the other.
A tool to "fix" your example probably doesn't exist, but a graph is an excellent way to represent dependencies, reason about progress in the flow, and enforce constraints in a generic way.
I think people are just spinning up a bunch of unorganized services and calling them micro and then complaining about it.
If you're on a small team that can build a clean monolith, the work to make them microservices is imo pretty trivial.
It's still not comparable to Rails, let alone some niche bleeding-edge technologies focused on productivity.