Monoliths Are the Future (changelog.com)



I couldn't agree more with the article.

Most people think a micro-service architecture is a panacea because "look at how simple X is," but it's not that simple. It's now a distributed system, and very likely, it's the worst of the worst: a distributed monolith. Distributed systems are hard; I know, because I build them.

Three signs you have a distributed monolith:

1. You're duplicating the tables (information), without transforming the data into something new (adding information), in another database (e.g. worst cache ever, enjoy the split-brain). [1]

2. Service X does not work without Y or Z, and/or you have no strategy for how to deal with one of them going down.

2.5 Bonus, there is likely no way to meaningfully decouple the services. Service X can be "tolerant" of service Y's failure, but it cannot ever function without service Y.

3. You push all your data over an event bus to keep your services "in sync" with each other, taking a hot shit on the idea of a "transaction." Over time the event bus pushes your data further out of sync, making you think you need an even better event bus... You need transactions, and (clicks over to the Jepsen series and laughs) good luck rolling that on your own...

I'm not saying service-oriented architectures are bad, I'm not saying services are bad; they're absolutely not. They're a tool for a job, and one that comes with a lot of foot-guns and pitfalls, many of which people are not prepared for when they ship that first micro-service.

I didn't even touch on the additional infrastructure and testing burden that a fleet of micro-services bring about.

[1] Simple tip: Don't duplicate data without adding value to it. Just don't.


We've moved a lot of services into Kubernetes and broken things up into smaller and smaller micro-services. It definitely eliminates a lot of the complexity for developers ... but you trade it for operational complexity (e.g. routing, security, mismatched client/server versions, resiliency when a dependency isn't responding). I still believe that overall software quality is higher with micro-services (our Swagger documents serve as living ICDs), but don't kid yourself that you're going to save development time. And don't fall into the trap of shrinking your micro-services too small.


The big trade-off is the ability to rewrite a large part of the system if a business pivot is needed. That was the bane of the previous company I worked at: engineering and operations were top notch, but unfortunately the split was done too soon, and it killed the company because it could not adjust to a moving market (i.e. customer and sales feedback was ignored because many new features would have required a daunting architecture change). It was very optimized for use cases that were becoming irrelevant. In my small startup, where product-market fit is still moving, I always thank myself that everything is under-engineered in a monolith when signing a big client that asks for adjustments.


You’ll have endless race conditions to deal with, even when storage is central and unique.

We learned, and are still learning, that.


Unique storage for multiple services sounds like a recipe for disaster. The purpose of splitting services, or at least one of them, is to decouple parts of the code at a fundamental level, including storage and overall ownership thereof. You're probably better served with a modular monolith if you really can't break storage up.


No, only one service is reading/writing; everything else just calls that. Still, things get quite tangled when it involves talking to multiple other teams and needing to keep everything in sync.


Ok, but then what's the point of splitting it in the first place? The way I see it is to split your domain so that a team owns not only the code, but also the model, the data, the interface and the future vision of a small enough area. If a service owns all the data, then someone who needs to make any change is bottlenecked by it and they would need knowledge beyond their domain. So the key is defining the right domains (or domain boundaries). Unfortunately most people just split before thinking about the details of this process, so the split will sooner or later hit a wall of dependencies.


We need synchronous workflows and also asynchronous workflows. That was the primary reason. Now, that doesn't mean it must be split, but since we're running on multiple hosts anyway, it wasn't hard to split the asynchronous functions off into a separate batch service.


> And don't fall into the trap of shrinking your micro-services too small.

^ this

I think the naming decision for the concept has been detrimental to its interpretation. In reality, most of the time what we really want is "one or more reasonably sized systems with well-enough-defined responsibility boundaries".

Perhaps "Service Right-Sizing" would steer people to better decisions. Alas, that "Microservices" objectively sounds sexier.


> It definitely eliminates a lot of the complexity for developers

We're currently translating a 20-year-old ~50MLOC codebase into a distributed monolith (using a variety of approaches that all approximate the strangler pattern). I have far less motivation to go to work if I know that I will be buried in the old monorepo. I can change, build, and deploy a service in less than an hour. Touching the monorepo is easily 1.5 days for a single change.

We seem to be gaining far more in terms of developer productivity than we are losing to operational overhead.


Sorry ... I should have said "don't kid yourself that you'll save time" instead of developer time. We do indeed have a faster change cycle on every service which is a win even if we're still burning (in general) the same number of hours over the whole system.

I also should have mentioned that it's definitely more pleasant for those in purely development roles. Troubleshooting, resiliency, and system effects don't impact everyone (and I actually like those types of hard problems). I'd also suggest that integrating tracing, metrics, and logging in a consistent way is imperative. If you're on Kubernetes, using a proxy like Istio (Envoy) or LinkerD is a great way to get retries, backoff, etc. established without changing code.

Finally, implementing a healthcheck end-point on every service and having the impact of any failures properly degrade dependent services is really helpful both in troubleshooting and ultimately in creating a UI with graceful degradation (toasts with messages related to what's not currently available are great). I have great hopes for the healthcheck RFC that's being developed at https://github.com/inadarei/rfc-healthcheck.
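For illustration, a minimal sketch of such an endpoint in Flask (the response shape loosely follows that draft RFC; the dependency probe, field values, and port are all made up):

    from flask import Flask, jsonify

    app = Flask(__name__)

    def database_ok() -> bool:
        # Stand-in for a real dependency probe (ping the DB, etc.)
        return True

    @app.route("/health")
    def health():
        ok = database_ok()
        body = {"status": "pass" if ok else "fail"}
        # 200 while healthy; 503 lets orchestrators and LBs react
        return jsonify(body), (200 if ok else 503)

    if __name__ == "__main__":
        app.run(port=8080)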


That’s an encouraging story to hear. The thing I’ve noticed is that the costs of moving a poorly written monolith to a microservice architecture can be incredibly high. I also think that microservice design really needs to be thought through and scrutinized, because poorly designed microservices start to suck really quickly in terms of maintenance.


And also trading off with how easy it is to understand the system. If you have one monolith in most cases it's a single code base you can navigate through and understand exactly who calls who and why.


I totally agree. Mostly we are doing microservices the wrong way. We are not drawing the boundaries correctly, they are too small, they have too many interdependencies, and they don't really encapsulate the data. There is not enough guidance about sizing them. We are just building distributed monoliths, which is great for cloud companies, because they get to sell many boxes.


Micro-services are just connected things which work together to accomplish something. But where are those connections described? In some tables somewhere. Maybe.

Whereas if you write a single monolithic program its connections are described in code, preferably type-checked by a compiler. I think that gives you at least theoretically a better chance of understanding what are the things that connect, and how they connect.

So if there were a good programming language for describing micro-services, it would probably resolve many management difficulties, and then the question would simply be whether we get performance benefits from running on multiple processors.


There is such a language, and it is Erlang.


Literally this. Reading the article I kept thinking to myself, "which is why Erlang/Elixir is great, because it doesn't make you choose up front." It's wild that, with how popular Elixir has gotten, it still isn't seen as a serious contender by many companies.


I write Elixir professionally, and Erlang/Elixir and the BEAM are no one-stop solution to these problems either. They have tools to help you, but you can very easily end up in the same boat.


I have never tried Erlang, but I have read that it doesn't have static type checking. How does it guarantee that different services are following protocols?


Erlang is dynamically typed in part, I believe, to allow hot-swapping code in a running system. It relies on pattern matching to enforce code contracts, i.e. your types are more like guarantees that a pattern holds true, even if some specifics of that protocol have changed. Thus, with zero downtime, you can make updates to the system while still knowing that your assertions about received data matching your protocol remain true. Adapters can act as both a type update and a schema migration simultaneously, as in the case where you wish to support multiple versions of an API at once.

The runtime system has built-in support for concurrency, distribution, and fault tolerance. Because of the design goals for Erlang and its runtime, you get services that can all run on one system or be distributed across a network, but the code that you actually write is relatively simple; the entire distributed system acts as a fault-tolerant VM with functional guarantees.

If your startup node fails, then other nodes are elected. If a node crashes while in the middle of a method, another node will execute the method instead.

The runtime itself has some analogies to functional coding styles. It runs on a register machine rather than a stack machine. The call and return sequence is replaced by direct jumps to the implementation of the next instruction.


This is the relevant chapter in the manual: https://erlang.org/doc/reference_manual/typespec.html


> Simple tip: Don't duplicate data without adding value to it. Just don't.

Not much is said in public about S3's design principles, but that was one of them.

Disclaimer: Recalling from memory.


I don't want to advocate one way or another (micro vs. monolith), because tomato, tomahto. However, here are a few arguments in defense of microservices regarding the three signs you listed:

1. Microservices do not have some inherent property of having to duplicate data. You can keep data in a single source and deliver it to anyone who needs it through an API. There are infinitely many caching solutions if you are worried about this becoming a bottleneck.

2 and 2.5. There are tools for microservice architectures that counter these problems to a degree, the mainstream example being containers and container orchestration (e.g. Docker and Kubernetes). One can even argue that microservices force you to build your systems so they are more robust than your monolith would be. If the argument for monoliths is that it's easier to maintain reliability when every egg is in one basket, then you are putting all your bets on that basket, and it becomes a black hole for developer and operations resources, as well as making operational evolution very slow.

3. There are again tools for handling data and "syncing" (although I don't like the idea of having to "sync") the services, for example message queues / stream processing platforms (e.g. Kafka). If some data or information might be of interest to multiple services, you should push such data to a message queue and consume it from the services that need it, as in the sketch below. The "syncing" problem sounds like something that arises when you start duplicating your data across services, which shouldn't happen (see my argument on 1).
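A minimal sketch of that publish/consume pattern with the kafka-python client (broker address, topic, payload fields, and the handler are all illustrative):

    import json
    from kafka import KafkaProducer, KafkaConsumer

    # The producing service publishes an event once, at the source of truth.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("orders", {"order_id": 42, "status": "paid"})
    producer.flush()

    # Any interested service consumes the event instead of copying tables.
    consumer = KafkaConsumer(
        "orders",
        bootstrap_servers="localhost:9092",
        group_id="billing",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        handle_order_event(message.value)  # hypothetical handler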

Again, not to say microservices are somehow universally better. Just coming to the defense of their core concepts when they get unfairly accused.


The trouble is that this process of streaming events over Kafka or Kinesis means that subscribed microservices will be duplicating the bus data in their own way in their local databases. If one of them falls out of the loop for whatever reason you are in trouble.

Now, there is a pattern called Event Sourcing (ES) which proposes that the source of truth should be the event bus itself and microservice databases are mere projections of this data. This is all good and well except it's very hard to implement in practice. If a microservice needs to replay all business events from months or years in the past it may take hours or days to do this. What about the business in the meantime? If it's a service that significantly reduces the usability of your application you effectively have a long downtime anyway.

Transactional activity becomes incredibly hard in the microservices world, with either two-phase commit (only good for very infrequent transactions, due to performance) or so-called Sagas (which are very complex to get right and maintain).

Any company that isn't delivering its service to billions of users daily will likely suffer far more from microservices than they will benefit.


> This is all good and well except it's very hard to implement in practice. If a microservice needs to replay all business events from months or years in the past it may take hours or days to do this.

In my experience it's not hard to implement, but of course it depends on the problem domain (and probably also on not splitting things up willy-nilly because of the microservices fad). I think the key to event sourcing and immutability in general is to not overdo it. For example you will likely need to redact certain data (e.g. for legal compliance), so zero information loss is out. Systems like Kafka are a poor choice for long term data storage, the default retention is 1 week for a reason.

But the things that are wonderful about event sourcing (the ability to inspect, replay and fix because you haven't lost information) mostly materialize over a 1 week timeframe.

If you need to recover a lot of state from the event log, you will need to store aggregated event data at regular intervals to play back from, in order to get acceptable performance. But in practice, in many cases the granularity of data you need goes down as the data ages anyway, and you do some lossy aggregation as a natural part of your business process (as opposed to doing it to deal with event sourcing performance problems). I.e. for the short timeframe Kafka is the source of truth, but for the stuff you care about long term it's some database, and this happens kind of naturally. So often you don't need to implement checkpointing.


You're right that micro-services avoid a lot of their pain if they have one consolidated "data service" that sits on top of their data repositories. But a micro-services architecture with a consolidated data service is like an airplane on the ground: true, it can't fall out of the sky, but it's about as useful as a car with awful gas mileage.

Once you add this consolidated data service, every other service is dependent on the "data service" team. That means almost any change you make requires submitting a request. Are their priorities your priorities? I would hate to be reliant on another team for every development task.

Theoretically you could remove this issue by allowing any team to modify the data service, but then at that point you've just taken an application and added a bunch of http calls between method calls.

This same problem pops up with resiliency. If you have a consolidated data service, what happens if your data service goes down? How useful are your other services if they can't access any data?


I'm not following. How do things work better for the non-microservice approach?

Re. teams: For any project above a certain size, you'll have teams. Whether that's a network boundary, a process boundary, or a library boundary doesn't change that you'll have multiple teams for a large project.

I'm not sure I get the resiliency point. I worked on a project where the dependent data service was offline painfully frequently. We used async tasks and caching to keep things running and were able to let the users do many tasks. For us, our tool was still fairly useful when dependencies went down. If we had used a monolith, everything would be down, right? That doesn't sound better.


> Re. teams: For any project above a certain size, you'll have teams. Whether that's a network boundary, a process boundary, or a library boundary doesn't change that you'll have multiple teams for a large project.

For sure, and one of the big selling points for micro-services is you can split those teams by micro service, with each team having an independent service they are responsible for. But when a big chunk of everyone's development is done on one giant service everyone shares you don't get the same benefits you would if the services were independent. Or put another way, splitting micro-services vertically can yield a bunch of benefits, but splitting them horizontally introduces a lot of pain with few benefits.

> I'm not sure I get the resiliency point. I worked on a project where the dependent data service was offline painfully frequently. We used async tasks and caching to keep things running and were able to let the users do many tasks. For us, our tool was still fairly useful when dependencies went down. If we had used a monolith, everything would be down, right? That doesn't sound better.

I'm not saying to never spin off services. If you have a piece of functionality that you just can't get stable for the life of you, splitting it off into its own service, and coding everything up to be resilient to its failure, makes a lot of sense. (I am very curious what the cause of the data service crashing was that you couldn't fix.)

But micro-services aren't a free lunch for resiliency. You're increasing the number of systems, servers, configurations, and connections, which by default will decrease uptime until you do a ton of work. Not to mention that tracking and debugging cross-service failures is much more difficult than on a single server.


> Don't duplicate data without adding value to it.

What about running multiple versions of a microservice in parallel -- don't they each need their own separate databases that attempt to mirror each other as best they can?


I'm assuming you mean a production micro-service; if that's not the case, please elaborate a bit more...

The short answer is "no," as succinctly stated by the, I assume from the name, majestic SideburnsOfDoom. The versions shouldn't _EVER_ be incompatible with each other.

E.g. you need to rename a column.

Do not: rename the column, e.g. `ALTER TABLE RENAME COLUMN...`. Because your systems are going to break with the new schema.

Do: add a new column with the new name, and migrate data to the new column; once it's good, upgrade the rest of your instances, then drop the old column. This way you can use both versions at the same time without breaking anything. Yes, it can be a little tricky to get the data synced into the new column, but that's a lot less tricky than doing it for _every_ table and column.
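A minimal sketch of that expand/backfill/contract sequence, using SQLite for brevity (the table and column names are made up; DROP COLUMN needs SQLite >= 3.35):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    con.execute("INSERT INTO users (name) VALUES ('ada'), ('grace')")

    # Expand: add the new column alongside the old one.
    con.execute("ALTER TABLE users ADD COLUMN full_name TEXT")

    # Backfill: new writes should populate both columns until every
    # service instance has been upgraded to the new schema.
    con.execute("UPDATE users SET full_name = name WHERE full_name IS NULL")

    # Contract: only after every reader/writer uses full_name.
    con.execute("ALTER TABLE users DROP COLUMN name")

    print(con.execute("SELECT id, full_name FROM users").fetchall())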


Those are just different (versions of) interfaces for interacting with the same data source, i.e. no, not several databases... But I'm no expert.


No.


Sounds exactly like a project I am in. Not to mention that we have like three "microservices" that access the exact same database. Oh, and a single Git repo.

Yeah...


Google technically has one big repo. It just depends on how you do it.


Ok, then I'll just keep calm and continue with my distributed monolith. In 5 years it will be mainstream and the new way to go. I can even imagine the newsletter titles: "Distributed monolith - the golden mean of software architecture".


The most basic thing I see people neglecting is that inserting a network protocol is really adding many additional components that weren't there before, often doubling or even tripling the amount of code, config, and documentation required. If there's a single large project that combines "A+B" modules with no networking and you split this into networked services, then you now have:

    1) "A" Component
    2) "B" Component
    3) "A" Server
    4) "A" Client
So for example if you started off with a single "project" in your favorite IDE, you now have 4, give or take. You might be able to code-gen your server and client code out of a single IDL file or something, but generally speaking you're going to be writing code like "B -> A client -> A server -> A" no matter what instead of simply "B -> A".

Now you have to worry about network reliability, back-pressure, queuing, retries, security, bandwidth, latency, round-trips, serialization, load-balancing, affinity, transactions, secrets storage, threading, and on and on...

A simple function call translates to a rat's nest of dependency injection, configuration reads, callbacks, and retry loops.

Then if you grow to 4 or more components you have to start worrying about the topology of the interconnections. Suddenly you may need to add a service bus or orchestrator to reduce the number of point-to-point connections. This is not avoidable, because if you have fewer than 4 components, then why bother to break things out into micro-services in the first place!?

Now when things go wrong in all sorts of creative ways, some of which are likely still the subject of research papers, heaven help you with the troubleshooting. First, it'll be the brownouts that the load balancer doesn't correctly flag as a failure, then the priority inversions, then the queue filling up, and then it'll get worse from there as the load ramps up.

Meanwhile nothing stops you having a monolithic project with folders called "A", "B", "C", etc... with simple function calls or OO interfaces across the boundaries. For 99.9% of projects out there this is the right way to go. And then, if your business takes off into the stratosphere, nothing stops you converting those function call interfaces into a network interface and splitting up your servers. However, doing this when it's needed means that you know where the split makes sense, and you won't waste time introducing components that don't need individual scaling.
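For instance, a minimal single-file sketch of that kind of in-process boundary (all names invented); in a real project "A" and "B" would be separate folders/packages, and the function signature is the seam where a network interface could later be introduced:

    # Component "A": this function is the public boundary; storage
    # details stay hidden behind it.
    def a_get_user(user_id: int) -> dict:
        return {"id": user_id, "name": "example"}

    # Component "B" calls across the boundary with a plain function call.
    # Later, this call site is where an HTTP client could be swapped in.
    def b_greeting(user_id: int) -> str:
        user = a_get_user(user_id)
        return f"Hello, {user['name']}!"

    print(b_greeting(1))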

For God's sake, I saw a government department roll out an Azure Service Fabric application with dozens of components for an application with a few hundred users total. Not concurrent. Total.


> 2.5 Bonus, there is likely no way to meaningfully decouple the services. Service X can be "tolerant" of service Y's failure, but it cannot ever function without service Y.

Nit: if service X could function without service Y, then it seems to follow that service Y should not exist in the first place. And the same goes for whatever provided service Y's functionality before the microservice migration.


Classic example is recommendations on a product page. If the personal recommendation service is not available / slow to respond, you might fall back on recommendations based on the best sellers or even fall back further by not giving recommendations at all.

Recommendations are not strictly necessary, but not showing them will significantly affect the bottom line, so you don't want to skip them if you can avoid it. But not showing the product page (in a timely manner) because the recommendation engine has a hiccup is even worse for your bottom line.
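A minimal sketch of that fallback chain (the endpoints, timeout, and response shapes are all illustrative):

    import requests

    PERSONAL_RECS = "http://recs.internal/users/{user_id}/recommendations"
    BEST_SELLERS = "http://catalog.internal/best-sellers"

    def recommendations_for(user_id: int) -> list:
        # Try the personalization service with a tight timeout...
        try:
            resp = requests.get(PERSONAL_RECS.format(user_id=user_id), timeout=0.2)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            pass  # degrade rather than block the product page

        # ...fall back to best sellers, and finally to no recommendations.
        try:
            resp = requests.get(BEST_SELLERS, timeout=0.2)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            return []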


Not necessarily. Maybe service X logs in a user and service Y sends them an email letting them know that someone logged in from a new location. Y adds value but it's not essential to X.


> Most people think a micro-service architecture is a panacea because "look at how simple X is," but it's not that simple. It's now a distributed system, and very likely, it's the worst of the worst: a distributed monolith. Distributed systems are hard; I know, because I build them.

This line of argument fails to take into consideration any of the reasons why in general microservices are the right tool for the right job.

Yes, it's challenging, and yes, it's a distributed system. Yet with microservices you actually are able to reuse specialized code, software packages, and even third-party services. That cuts down on a lot of dev time and cost, and makes the implementation of a lot of POCs or even MVPs a trivial task.

Take Celery, for example. With Celery, all you need to do to implement a queueable, trivially scalable background task system is write the background tasks, get a message broker up and running, launch worker instances, and that's it. What would you have to do to achieve the same goal with a monolith? Implement your own producer/consumer that runs on the same instance that serves requests? And aren't you actually developing a distributed system anyway?
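For the unfamiliar, that whole setup is roughly this much code (the broker URL and task body are illustrative):

    # tasks.py
    from celery import Celery

    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def send_welcome_email(user_id):
        # render and send the email here
        print(f"sending welcome email to user {user_id}")

    # Enqueue from the web process:  send_welcome_email.delay(42)
    # Run a worker pool:             celery -A tasks worker --concurrency=8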


> Take Celery, for example. With Celery, all you need to do to implement a queueable, trivially scalable background task system is write the background tasks,

that's a little bit of a straw man, because that's not the "microservice" architecture this post is talking about. I personally wouldn't call that a "microservice" architecture; I'd call it "a background queue," although strictly speaking it can be described as such.

what this post is talking about are multiple synchronous pieces of a business case being broken up over the http / process line for no other reason than "we're afraid of over-architecting our model". This means your app has some features like auth, billing, personalization, reporting. You start from day one writing all of these as separate HTTPD services rather than just a single application with a variety of endpoints. Even though these areas of functionality may be highly interrelated, you're terrified of breaking out the GoF patterns, using inheritance, or ORMs, because you had some bad experience with that stuff. So instead you spend all your time writing services, spinning up containers, defining complex endpoints that you wouldn't otherwise need... all because you really want to live in a "flat" world. I'm not allowed to leak the details of auth into billing because there's a whole process boundary! Whew, I'm safe.

Never mind that you can have an architecture that is all separate processes with HTTP requests in between, and convert it directly to a monolithic one with the same strict separation of concerns; you just get to lose the network latency and complex marshalling between the components.


Programmers are, by and large, quite bad at not breaking encapsulation when they're dealing with a monolith. Not their fault, really, but when management comes to you and says, hey, can't you get it done in just a day, we really need this, and you know you could if you just hacked through that particular isolation barrier just this once, and yeah it will create bad bugs if things change in a particular way in the future, but you'll leave a comment so it will be fine and this way you can go home and have an ipa and finish re-runs of that show you like so you break the rules just this once. And that only happens a few more times, and then slowly bit by bit things become intertwined and rigid, and then five years down the road when half the people have left and the team has grown and other teams use the project and are trying to commit code to it you have to go to management, hat in hand, sorry, we need to rewrite it from scratch to keep adding features, new and exciting and unexpected things happen when we change things in the current project.

Independent services that create a coherent whole enforce isolation barriers. I don't believe in microservices. These things don't need to be micro. They can be just normal-sized services. I don't even particularly care how bad things get internal to those barriers. There are programmers who write shit I think is hot garbage and will cause all kinds of bugs as time goes on. But if they are confined to their own space, and they're solving their problem and are happy and making management happy, then w/e. That piece of the whole will collapse and die eventually, but it won't take everything with it. It's just a piece, and it provided value for a while, so probably worth it from a business sense.

But when you have a monolith? And Developer Dave and Programmer Pete tell Big Boss Bob that they could sure get that feature out quick if only it weren't for those pesky rules preventing them from putting in a few mutexes in that one module so they can just read some data directly from it, and boy wouldn't it that be swell? Well Big Boss Bob says we need to get this feature SHIPPED BOYES so buckle up and put in those mutexes, and now the fucking thread goes into a deadlock state every so often but it's really intermittent so you spend hours debugging the damn things and late nights because yeah features gotta ship but shit gotta work, and you trace it back to Developer Dave and Programmer Pete and their change but what do you do? Big Boss Bob said do it that way and what? You gonna whine to Director Dan about it? Is that gonna get you back your late night spend figuring out what was going on? Nah.

Make systems that are just small that when people break them completely it doesn't mean the end of the world to throw it out and start over.


I'm really tired of immensely awkward and problematic design patterns that are intended to act as guard rails for good design. It is a myth that this actually works. A rushed project is a rushed project; a team that is inclined to overbuild will overbuild no matter what artificial constraints you give them up front. A microservice design that has concurrency problems can be extremely difficult to debug, in my personal experience this is easily much more difficult than debugging a monolith. Having to spin up 30 services in order to reproduce an issue that could otherwise be done in a single in-memory unit test is a real thing.


Not every tool is right for every job. But the idea that services provide no guardrails is, in my experience, categorically false. Like I said, not a fan of "micro", but there is a ton of inherent value in defining implicit isolation barriers. It's absolutely true that there is NO design impervious to idiots. This has always been true and will remain true until the heat death of the universe. But man, "hard" isolation barriers are nice to have, either through services or packages. I worked for a company and I was on a team that built a set of packages which were distributed to a bunch of other teams, and due to organization changes another team took ownership of one of those packages. In almost no time flat it became absolutely convoluted. Not my problem though, right? Here's the thing: it became buggy and hard for them to maintain, but it had no effect except for when using that particular package. I had worked on a project previously where there was a legit monolith, no packages or library delineations, one code base, and boy with all the teams sticking their fingers in that particular pie things got out of hand in a BAD way. It was so bad that certain teams would get into change wars where one group would make a change in a module that would break another group who would change it back and break the first group, and back and forth.

After my team had taken that project and split it into a bunch of smaller packages, it's not like everyone magically became better programmers. The same people who were introducing fucking spaghetti code that did who-knows-what in overly complex ways were still around, and they were given effectively their own sandboxes. In fact quite a few more teams within the company began using those packages, which just became modules they shipped with. We no longer had to deal with people screwing with core packages because they no longer had ownership, they could no longer make merges into that area of the code base, so we could keep things stable.

So I'm a hard sell on monoliths. Like, I'm not actually pro micro-services, per se. I'm mostly just anti-monolith. Giant code bases with multiple teams simultaneously contributing are doomed to fucking disaster.


One last comment on this, if you have like 5 guys just hacking away on some project and you feel that splitting it up into 30 micro-services is the only way to make things work then you've got problems. I'm not talking about small teams building straightforward systems. I'm talking about giant organizations building giant complicated systems, trying to design and manage that.


> Programmers are, by and large, quite bad at not breaking encapsulation when they're dealing with a monolith.

What is there about microservices that makes them better at this? Most of the stuff I have seen has been highly coupled.


Hard process boundaries, largely. I mean, unless you use mmio in which case you're probably going out of your way to break things. Also design constraints should be taken into account- if you need to build highly performant systems, yeah, probably shouldn't have a bunch of services talking to each other over http/json. But you could probably also isolate the part of the system that actually needs to be performant.

Also, like I said, not a fan of the "micro" terminology, it just confuses the issue.

And keep in mind that coupling is not the opposite of encapsulation. Encapsulation just means hiding internal state. So unless you're going completely out of your way to break things, you'll have services that communicate through some message passing protocol, which means services are inherently encapsulated. It's not that you can't break that, it's that generally it's hard.

Here's an example of breaking it, though:

You have an API that talks to the database. You need to get users, groups, and preferences. Instead of writing different endpoints to access all of these things individually, clever you decides to simply make an endpoint that is "postQuery" and you give it a sql string and it will return the results. Great. You have now made the API effectively pointless.

Another example: you have an API that needs to perform standard accounting analysis on an internal dataset. Instead of adding endpoints for each calculation (or an endpoint with different inputs), you create an endpoint "eval" that will internally eval a string of the language of your choice. Congrats, you can now use that API to execute arbitrary code! No need to define any more pesky endpoints.

So yeah, absolutely people can make shit decisions and build garbage. It's entirely possible. But hey, at least it's pretty obvious this way and if you see your team do this you can look for another job.


Physical isolation. I don't actually like the argument, but I can't quite say it's wrong, because the Java landscape just put a huge amount of effort into modularity and encapsulation because developers kept using reflection to bypass module boundaries, usually for performance reasons. With a microservices architecture that is no longer possible, because there is physically no way to read across address-space boundaries without sending a message and introducing an RPC, which would require the support of the target module.

Now that said, with the new Jigsaw module system in the JVM, and the multi-language support that is constantly getting better, a disciplined enough senior management team could enforce module boundaries within the process. It means any change to JVM command-line flags would require the approval of the most senior tech lead, because that's how module boundary enforcement can be disabled, but if you have that and it works, you would get significant performance and simplicity benefits.


You'll never ever be able to stop bad developers from making poor choices and ruining things.

So you have encapsulated services... boss comes and says "we need feature X right away". What if feature X spans all of your microservices? The bad programmer will hack together a monstrosity between multiple services. It's the micro-lith problem: instead of a monolith, you have a monolith disguised as micro services. Now their poor choices are spread across a lot of services and distributed, it's not really confined to just one service.


> What if feature X spans all of your microservices? The bad programmer will hack together a monstrosity between multiple services.

Wait, what is a good programmer supposed to do in this scenario?


> that's a little bit of a straw man, because that's not the "microservice" architecture this post is talking about. I personally wouldn't call that a "microservice" architecture; I'd call it "a background queue," although strictly speaking it can be described as such.

It is not a straw man; it's a concrete example of the technical, practical, operational, and economical advantages of a microservices architecture, more specifically service reuse, especially managed services provided by third parties.

While you're grokking how a multithreading library is expected to be used, I've already fired up a message broker that distributes tasks across a pool of worker instances. Why? Because I opted not to go with the monolith and went with the microservices/distributed architecture approach.


> While you're grokking how a multithreading library is expected to be used, I've already fired up a message broker that distributes tasks across a pool of worker instances. Why? Because I opted not to go with the monolith and went with the microservices/distributed architecture approach.

I'm so confused by that statement... because I can't for the life of me figure out how you got there.

You absolutely can have a monolith that is multi-threaded or asynchronous and has resource/task pools. The JVM, for instance, has threads; I use the BEAM (Elixir) personally, and it even preemptively schedules my tasks in parallel and asynchronously... but I still don't get what multi-threading has to do with microservices.

Microservices and monoliths are boundaries for your application; they aren't implementation details (i.e. "all microservices must be asynchronous" is strictly not true) in and of themselves, they're design details. That design can influence the implementation, but they are separate.

Ex. there are plenty of people who use Sidekiq and Redis the way you're using Celery but don't call it a microservice. It's just a piece of their monolith, since it's largely the same dependencies.


BEAM is god damn magical. It would be hard to replicate that kind of decentralized monolith without considerably more work in any other technology.

I mean, consider a gen server mounted in your supervisor tree: it's 5 minutes of work, tops. Doing the same with Kubernetes would require coordinating a message broker, picking a client library, creating a restart strategy, and networking, all of which would add considerably to your development time.


You can have a worker pool with a monolith too. Just run multiple copies of your app.


> What would you have to do to achieve the same goal with a monolith?

You would take Celery, get a message broker up and running and launch some worker instances.


I'm completely in your camp, and I'm surprised by the lack of nuance HN seems to show (especially regarding micro-services & Kubernetes).

There are many benefits to having microservices that people seem to forget because they think that everyone interested in microservices is interested in splitting their personal blog into 4 different services.

They take coordination, good CI/CD, and a lot of forethought to ensure each service is cooperating in the ecosystem properly, but once established, it can do wonders for dev productivity.


I can't tell if my project is a monolith or microservices, but it's going well so far. We use a single scalable database instance as a message broker and persistence source, and have a common framework that implements distributed algorithms which every service uses to expose "OS-like" constructs (actor activation, persistent collections, workflows, Git, etc.). All communication is done through independent protocols. There's not much coupling between protocols (except for common ones like "schedule this job on this device"), so it's not really a cobweb of dependencies, but everything relies on that single database.

I think if the database gets too overloaded I'll partition certain tree nodes across multiple masters (this is feasible because the framework doesn't rely on a single timestream).

With the level of shared code (the framework) and the single database, it's somewhat monolithic but the actors themselves are quite well-behaved and independent on top of it.


The understanding is your monolith may use third party services. Nobody is writing SQL engines here.


I'm a database guy, so the question I get from clients is, "We're thinking about breaking up our monolith into a bunch of microservices, and we want to use best-of-breed persistence layers for each microservice. Some data belongs in Postgres, some in DynamoDB, some in JSON files. Now, how do we do reporting?"

Analysts expect to be able to connect to one system, see their data, and write queries for it. They were never brought into the microservices strategy, and now they're stumped as to how they're supposed to quickly get data out to answer business questions or show customers stuff on a dashboard.

The only answers I've seen so far are either to build really complex/expensive reporting systems that pull data from every source in real time, or do extract/transform/load (ETL) processes like data warehouses do (in which the reporting data lags behind the source systems and doesn't have all the tables), or try to build real time replication to a central database - at which point, you're right back to a monolith.

Reporting on a bunch of different databases is a hard nut to crack.


This is what gave rise to data lakes. The typical data lake maturity model I see in enterprise is:

1. Pay a ton of money to Microsoft for Azure Data Lake, Power BI, etc.

2. Spend 12 months building ETLs from all your microservices to feed a torrent of raw data to your lake.

3. Start to think about what KPIs you want to measure.

4. Sign up for a free Google Analytics account and use that instead.


> Spend 12 months building ETLs

Okay, sounds reasonable enough for a complex enterprise.

> to feed a torrent of raw data to your lake

Well, there's the problem. Why is it taking a year to export data in its raw, natural state? The entire point of a data lake is that there is no transformation of the data. There's no need to verify the data is accurate. There's no need to make sure it's performant. It's just data exported from one system to another. If the file sizes or record counts match, you're in good shape.

If it's taking a year to simply copy raw data from one system to another, the enterprise has deeper problems than architecture.


If you are "export[ing] data in its raw, natural state" then haven't you lost the isolation benefits of microservices? Now you have external systems dependent on your implementation details, and changing your schema will break them.


That's a problem for the future data engineers to deal with. The data lake is an insurance policy so you only need to think about these problems if you later want the data. If you already know you want to analyze the data, then a data lake is not a good choice.

Yes, it makes life harder for the data engineers in the future, but it might turn out that analysts only ever need 5% of the data in the lake, and dealing with these schema changes for 5% of the data is easier than carefully planning a public schema for 100% of it.

It can be helpful to include some small amount of metadata in the export, though, with things like the source system name, date & time, # of records, and a schema version. The schema version could easily be the latest migration revision, or something like that.
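A sketch of such a manifest, written alongside each raw export (all field names and values are made up):

    import datetime
    import json

    manifest = {
        "source_system": "billing-service",
        "exported_at": datetime.datetime.utcnow().isoformat() + "Z",
        "record_count": 184223,
        # e.g. the latest migration revision of the source schema
        "schema_version": "20200114_add_currency_column",
    }

    with open("billing.2020-01-14.manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)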


The data lake is also a real, live GDPR PII time bomb if you worked out how to get the data in but not how to take it out.


But if I haven't spent the effort to extract it, do I really own it? Let me argue that I don't have it because all my implemented queries turn up none of your data. You wouldn't tax me on gold that hasn't yet been extracted, would you? (End of joke.)


> But if I haven't spent the effort to extract it, do I really own it?
If you collected it, you are responsible for it.


What I think you'd typically do is put different data under different keys/paths, so that red is personally identifiable data, yellow contains pointers to such data, and green is just regular data. You could have a structure like s3://my-data-lake/{red|yellow|green}/{raw|intermediate}/year={year}/month={month}/day={day}/source={system}/dataset={table}

Then you just don't keep red data for longer than 30 days.
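Assuming that layout, the 30-day rule can be enforced with an S3 lifecycle policy on the red/ prefix; a sketch with boto3 (the bucket name is made up):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-data-lake",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-pii",
                    "Filter": {"Prefix": "red/"},
                    "Status": "Enabled",
                    # S3 deletes matching objects ~30 days after creation
                    "Expiration": {"Days": 30},
                }
            ]
        },
    )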


Changing the schema of an upstream data source almost always breaks or requires updates to the downstream analytics system. It's an unavoidable problem whether it's a microservice or a monolith; you just get to choose where you put the pain.

Consider:

Source Data -> Data Lake -> ETL Process -> Reporting DataWarehouse(s)/DataMart(s) -> User Queries

vs

Source Data -> Data Lake -> User Queries

vs

MonolithDB -> User queries

vs

MonolithDB -> ETL Process -> Reporting DataWarehouse(s)/DataMart(s) -> User Queries

A schema change in the source data should be easy to absorb in the ETL process in example 1. Most changes are minimal (adding, removing, renaming columns). And for a complete schema redesign in the source data, a new entry in the data lake should be created, and the owners of the ETL process should decide whether the new schema should be mangled to fit their existing reporting tables or used to build new ones. Across the four models I outlined above, the first is by far the easiest to update and maintain, IMO.


If the benefit of microservices is primarily an organizational one around ownership, Conway's Law, and so on, then Example 1 still seems problematic, because it's likely a different team that has to deal with the fallout.

Another strategy is that the service has an explicit API or report specification; that way, the team that owns the services also owns the problem of continuing to support that while changing their internal implementation.

Of course, whether the benefits are worth the cost is probably organization specific, just like microservices in general.


The key insight and definition of a data lake was pulling data from across the enterprise and storing it in its raw form.

https://en.wikipedia.org/wiki/Data_lake

The reasons for it are hard to explain succinctly in an HN comment, but if you look up data lakes there will be a lot of explanation. It basically comes down to "is it better for the data integrators or the data consumers to massage the data?" And data lakes were the insight that it's really great when the consumers decide how to massage the data.


This implies that the engineers who lobbied so hard for microservices were even all that concerned with these benefits to begin with, and took this into account when designing the architecture of the system. More often than not, in my experience, the developers involved are more concerned with code ownership than genuine architectural concerns.


Yeah, I think you'd want the microservice to expose a bulk-export API endpoint, to an API specification. It would most likely need to transform the data, and possibly ignore some of it. And then you grab data from those and piss them into the lake. The lake now conforms to your published APIs.

To me this sounds great. And honestly you should do the same thing with a monolith. Nothing worse than "oh you can't make that schema change because a customer with a read-only view will have broken reports".


You also lose the benefits of microservices if they have to stop and change the data-exporting system all the time, slowing down development.

The main benefit of a dumb copy is that the production service is not impacted by reporting, only a copy is. This relates to performance (large queries) but also implementation time.


One way to avoid this is to have the microservices publish data changes to some other (monolithic) system, like an MQ system with a specified schema for the payload.

On the other hand, the notion that “microservices == completely independent of everything else” is an unrealistic one to hold.


Where I was a bit more than a year ago, they hired a really expensive consultant who did a data lake project that wasn't finished by the time my team was told to use it, so we rolled our own according to that team's instructions and best practices (best practices were a huge deal at this company; everything we did was best practices). We exported data to S3 under a particular structure, and I built an ETL system around that and Spotify's Luigi. No one else on my team knew what ETL was, which made me feel old. We spent two, maybe three months on this. The BI team got their data, and so did the marketing automation team.

But yeah, it's funny how these projects get complicated in larger organizations. Personally I would have rolled something even simpler on GNU/POSIX tools and scripts, in rather less than a month.


What does Google Analytics have to do with datalakes? Are you talking about some specific scenario?


Google Analytics is associated with gleaning useful, actionable insights from your users' behavior on your web sites and in your apps, which was what the guy who sold you on the concept of a data lake was promising.


While this is absolutely true in my experience and Google Analytics has handled most of my needs in contrast with a homegrown data lake or ETL, there's always the spectre of Google pulling the rug out from under you with service shutdown or massive price increase.

Use off the shelf stuff but be prepared to have to move in a (relative) hurry.


"Google Analytics" is, as I understand the original context, shorthand for "something simple, cheap, and immediately useful." Feel free to substitute Mixpanel or some Show-HN-Google-Analytics-replacement Docker image or whatever.


Hook MicroStrategy up to your lake then. If they just want inbound analytics and conversions, then IT were probably recommending a data lake out of their own desire to build it, rather than actual need.


Who besides your hypothetical salesman is arguing that the raison d'être for data lakes is user web analytics?


In my experience, people arguing for data lakes talk a lot about some unspecified future benefit. Companies typically want to learn things about their interactions with their customers that allow them to make more money, and thus GA or its equivalent represents the 80/20 solution (or more likely, 80% of the benefit for 1% of the cost).


Your statement implies a lot of assumptions about a business’s model. Our company cares about user interactions in our app, software development metrics, quality metrics, sales metrics, etc. GA is just one small piece of the puzzle.

“Data lake” may not be the right answer, but GA certainly isn’t.


I'm mostly being sarcastic but I'm partly describing an exact scenario that I'm witnessing right now. Business wants "analytics". IT starts spending a load of money. Business has no idea what it's for and buys their own thing to just track website visits which is what they wanted.


> Some data belongs in Postgres, some in DynamoDB, some in JSON files. Now, how do we do reporting?

One of the key concepts in microservice architecture is data sovereignty. It doesn't matter how/where the data is stored. The only thing that cares about the details of the data storage is the service itself. If you need some data the service operates on for reporting purposes, make an API that gets you this data and make it part of the service. You can architect layers around it: maybe write a separate service that aggregates data from multiple other services into a central analytics database so reporting can be done from there, or keep requests real-time but introduce a caching layer, or whatever. But you do not simply go and poke your reporting fingers into individual service databases. In a good microservice architecture you should not even be able to do that.


Sorry, but "making an API that gets you this data" is the wrong answer.

Most APIs are glorified wrappers around individual record-level operations, like "get me this user," or constrained searches that return a portion of the data, maybe paginated. Reporting needs to see all the data. This is a completely different query and service delivery pattern.

What happens to your API service, written in a memory-managed/garbage-collected language, when you ask it to pull all the data from its bespoke database, pass it through its memory space, then send it back down to the caller? It goes into GC hell, is what.

What happens to your API service when it issues queries for a consistent view of all the data and winds up forcing the database to lock tables? It stops working for users, is what.

There are so many ways to fail when your microservice starts pretending it is a database. It is not. Databases are dedicated services, not libraries, for a reason.

It is also true that analysts should not be given access to service databases, because the schema and semantics are likely to change out from under them.

The least bad solution? The engineering team is responsible for delivering either semantic events or doing the batch transformation themselves into a model that the data team can consume. It's a data delivery format, not an API.


>It is also true that analysts should not be given access to service databases, because the schema and semantics are likely to change out from under them.

It's not perfect, but what we do is create a bunch of table views that represent each of the core data types in the system. We can then do all of the complex joins to collect the data analysts want into an easy-to-query table, as well as trying to keep the views consistent even as the db changes.


So you have a single database for all your microservices?


>It goes into GC hell

Can you expand on this a little? Or point me to a paper I can read?


The service will need to read all its data and put it into objects, then extract the data from the objects to report it, then garbage collect all of that. For every single record in its entire data set.

You could say: but oh, why not just return the underlying data without making objects? Well, now you are exposing the underlying data format, which is what we're trying to avoid by giving this job to the service.


And thus such patterns lead to the absurdity where 90% of enterprise apps do little actual computation besides serializing and deserializing JSON (or XML if it's a "legacy" app).


It's remarkable what you can do with just functions and nested data structures. Used to be big on the whole OOP thing, data roles, so much effort for so little.

Now I try to think about problems as "I have input data of shape X, I need shape Y" and fractally break it down into smaller shape-changes. I am kinda starting to get what those functional programmers are yammering on about.


The parent comment said “is asked for all records..GC hell “.

Since a microservice deals with only its own data, and reporting is then across services, we'd need to query across services to get data and make sense of it. If we ever need to query all records, those records become domain objects in the microservice first, before being passed along. A large number of domain objects requires a large amount of memory, and processing and releasing those domain objects results in GC work on the released objects.


Wait, I would assume that the people in need of reporting would have a pretty good idea of what those reports should look like. That means you know exactly what data needs to be read from a data store optimized for reporting. Each micro-service contributes their share of data to a data store optimized for reading. This is a text-book use case for a non-relational document store. I'm really not seeing what's so difficult about building such a process.


Reporting and non-relational are like oil and water, coming from experience working with people who make reports.

It’s not like they come up with every report they think they might need while the micro service is being architected. They come up with a new report long after engineers have moved on. If it’s a SQL database, no problem. If it’s some silly resumeware data store, then what?


Yeah, this.

If you can't ask questions you didn't think of in advance, you didn't collect data.


Real question:

Why pull it into memory like that? Why not just pump it through a stream?


A stream would be the correct way to handle that problem, so backpressure can be used to prevent too much memory churn.
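
As a minimal sketch, a Python generator gives you pull-based streaming, where the consumer's pace acts as the backpressure (table and file names invented):

    import sqlite3

    def stream_records(conn, batch_size=1000):
        # yields rows one at a time instead of materializing the whole table
        cursor = conn.execute("SELECT id, payload FROM records")
        while True:
            batch = cursor.fetchmany(batch_size)
            if not batch:
                return
            yield from batch

    conn = sqlite3.connect("service.db")  # hypothetical service database
    # only ever holds about batch_size rows in memory at once
    for record_id, payload in stream_records(conn):
        print(record_id, payload)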


I came here simply to echo this statement! Design a reporting solution that is responsible for ingesting data from these micro services' persistence layers. Analysts should only ever be querying this reporting solution and should not be allowed to connect directly to any micro service persistence layer or API.

We have a whole industry around analytics and data, and the tools and processes to build this reporting layer are well established and proven.

Nothing will give you as many nightmares as letting your analysts loose on your micro service persistence layers!


This is seriously why my company still has monoliths.

Our databases are open to way too many people. What's worse, they are multi-tenant, making refactoring really hard.


Having more than one schema owner is practically a death sentence for development and engineering...

We used to have a few of those, especially on exadata clusters. Finally carted them out of the local dc after moving to RDS Aurora databases with strict policies. Might have caused 3 or 4 people to quit, but totally worth it for the 500+ people that stayed who now can own their data, schema and development (and be held responsible for it! -- another issue of multi-db-access, it's always someone else's fault). Went from deploying once a day with a 'heads up' message to no-message deploying multiple times per hour.


Why monoliths? Everyone still wants to have OLAP and OLTP systems, where analytics are done on OLAP. With this separation you can pull data from multiple sources into your analytics.

I cannot imagine people not doing that, or needing stats in real time. For most shopping/banking stuff you can get away with once-per-24-hours dumps, and then analytics can be done on those.


> But you do not simply go and poke your reporting fingers into individual service databases.

This is why I distrust all of the monolith folks. Yes, it's easier to get your data, but in the long run you create unmaintainable spaghetti that can't ever change without breaking things you can't easily surface.

Monoliths are undisciplined and encourage unhealthy and unsustainable engineering. Microservices enforce separation of concerns and data ownership. It can be done wrong, but when executed correctly results in something you can easily make sense of.


You're saying "monoliths encourage unhealthy engineering" and then in the next sentence say "when executed correctly" for microservices. That sounds like a having/eating cake type situation.


Not exactly. It's hard to tell from the outside if a monolith was architected well or is about to fall over.

In a microservice architecture it's harder to pretend you're doing it right.


> In a microservice architecture it's harder to pretend you're doing it right.

After seeing a few of them, I'd say: "it's less embarrassingly obvious that you're doing it wrong."

But dig into the code for a few endpoints and it usually doesn't take long to find the crazy spaghetti and the poorly carved-out breaches of separation of responsibilities.


I disagree, "doing it wrong" just looks different there.


The argument (which I sort of buy) is that microservices provide rails that keep people from doing certain stupid things like N clients depending on the data schema (making the schema a de-facto public interface).

The trick with microservices is that the ecosystem is maturing and there are still lots of ways to screw up other things that are harder to screw up with monoliths. In time 95% of those will go away (my specific prediction is that one day we will write programs that express concurrency and the compiler/toolchain will work out distributing the program across Cloud services--although "Cloud" will be an antiquated term by then--including stitching together the relevant logs, etc and possibly even a coherent distributed debugger experience).


You basically just described Erlang/OTP there.


To be fair, this is how I've seen tech decisions presented at most big tech companies.


Quit talking about what is behind the curtain


> It can be done wrong, but when executed correctly [...]

Quite the self-fulfilling prophecy there.

> Yes, it's easier to get your data, but in the long run [...]

Systems can and should be evolved and adapted over time, e.g. deploying components of the monolith as separate services. You can't easily predict what the requirements for your software are going to be in, say, 10 years.

And depending on the stage a company is at, easy access to data for business decisions outweighs engineering idealism.


> easy access to data for business decisions outweighs engineering idealism

I think there are different levels of sophistication of "engineering idealism". GP talks about "data ownership", and I get the desire to keep the data a microservice is responsible for locked in tightly with it. But let's be precise why it's good: because isolating responsibility reduces complexity. Not because code has some innate right to privacy.

In my own engineering idealism, there's no internal data privacy in the system. Things should be instrumentable, observable in principle. If an analyst wants to take your carefully designed internal NoSQL document structure and plug it into an OLAP cube for some reason, there must be a path to doing that; if that's an expected part of the business, it needs to be on the service's feature list, so that it is doable without degrading the service.

Software needs to be in boxes because otherwise we can't handle it mentally, but the boxes really shouldn't be that black.


Isolating responsibility reduces complexity for that piece of code. It increases complexity for assembling the whole thing into a holistic package, which is usually analytics' primary need.

YMMV, but the tradeoff is less complexity at the SWE/prod department, and more at the analytics team.


> But let's be precise why it's good: because isolating responsibility reduces complexity.

The thing is, it just shifts complexity around. Once you have microservices, you have to deal with a bunch of new failure modes, plus a bunch of extra code whose only purpose is to provide an interface to other services. And in terms of separating data, the worst part is that you've prevented accessing this data alongside some other data within the same transaction.


> Quite the self-fulfilling prophecy there.

Microservices require your organization to have an engineering culture. I would be afraid of introducing them at, say, Home Depot where (I've heard) your average programmer doesn't even write tests.

If you have engineering talent within a small multiplicative factor of Google (say 0.5), then you can pull off Microservices at your org.

Edit: I'm being downvoted, but I don't think it's a dangerous assumption or point to make that it takes a certain amount of discipline and experience to implement microservices correctly. When you have that technical capacity and the project calls for it, the benefit is tremendous.


I think you're being downvoted because you're implying monoliths don't require an engineering culture and that microservices are a silver bullet in getting systems built correctly.

I've seen good and bad in each approach. It's certainly possible to enforce good SOCs and proper boundaries in monorepos, and also possible to plough a system into the ground with microservices.

They're all just tools in your toolbox and both have a part to play in modern development.


You’d be surprised how sophisticated Home Depot is. They switched their monolith to microservices using Spinnaker and even contributed back to Spinnaker.


My client is doing a lot of it wrong. To be fair, they got sold a lot of really horrible and ridiculous advice from IBM consultants (is there another kind?), but they also have people in charge (organizationally and technically) who aren't great decision-makers.

As the article says though, you can't fix a people problem (bad engineering practices and discipline) by going from one technology to another (monolith to microservices).


Only when done by folks that never learned how to write modular code and package libraries.

The same folks aren't going to magically learn how to do distributed computing properly, rather they will implement unmaintainable spaghetti network calls with all the distributed computing issues on top.


And untangling a monolith tends to be much less problematic than untangling a bunch of microservices. For one thing, you can refactor/untangle it all offline, do your testing, and do a single release with the updated version, as opposed to trying to coordinate releases of a bunch of services whose interfaces/boundaries were poorly defined.

DDD enforces separation also.

It's about code quality; microservices are easily replaceable. Modules are too.

With both systems, the core part ( eg. mesh, Infrastructure, ... ) Is crucial.

I think experienced developers can see this, the ones that actually delivered products and had big code changes. The ones that handled their "legacy" code.

Microservices are just a way to enforce it, there are others. None are perfect or bad, both have their use-case.


I do not claim expertise here, but it would seem like microservices would add significant performance costs. Stitching together a bunch of results from different microservices is going to be a LOT more expensive than running a query with joins.


Humans are the most expensive part of the system. You have to make it easy for humans to understand and change the system, and at the end of the day that's the number one thing to optimize for. This is why microservices are compelling.

But to speak directly to your concern, you have to think about service boundaries and granularity correctly. Nobody is saying make a microservice out of every conceivable table. Think about the bigger picture, at a systems level. Wherever you can draw boxes you might have a service boundary.

Why would you need to join payment data to session and login data?

Do you need to compare employee roles and ACLs against product shipping data?

These things belong in different systems. If you keep them in the same monolith, there's the danger that people will write code that intertwines the model in ways it shouldn't. Deploying and ownership become hard problems.

The goal is to keep things that are highly functionally related together in a microservice and expose an API where the different microservices in your ecosystem are required to interact. (Eg, your employees will login.)

When the data analytics folks want to do advanced reporting on the joins of these systems (typically offline behavior), you can expose a feed that exports your data. But don't expose an internal view of it to them or they'll find ways of turning you into a monolith.
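
A sketch of such an export feed (file layout and field names invented): the service dumps its documented public model as newline-delimited JSON, and analysts consume the dump rather than the live tables.

    import json, datetime, pathlib

    def export_feed(records, out_dir="exports"):
        # a dated snapshot of the service's *public* model, not its raw tables
        today = datetime.date.today().isoformat()
        path = pathlib.Path(out_dir) / f"orders-{today}.jsonl"
        path.parent.mkdir(exist_ok=True)
        with path.open("w") as f:
            for r in records:
                # expose only documented fields, never internal column names
                f.write(json.dumps({
                    "order_id": r["id"],
                    "status": r["state"],
                    "total": r["total_cents"] / 100.0,
                }) + "\n")
        return path

    print(export_feed([{"id": 1, "state": "shipped", "total_cents": 995}]))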


In my experience it is a lot more difficult to navigate around all the different microservices to understand what needs to be done compared to being in a monolith where you can jump from file to file.

Also, what then happens is that microservices are created using different languages, which in turn adds so much complexity when trying to understand what is going on at the big-picture level.

And code gets repeated a lot more. If there is a change or update in a microservice, everyone will need to figure out which services depend on it and how they will have to adapt. With a monolith you can just use your IDE to see what will break if you make a change. So much repeated business logic. Creating a new feature involves many meetings to figure out which services have to be updated, and in what way.

It is crazy mess in my opinion.

I have been with a company that had a monolith application which they split up into more than 15 services (some Python, some JS, Scala, Java, etc...). The monolith is still used for some parts that are not migrated. I was working on a single service, having no idea how the whole system worked together. Then I had to do something in the old parts and I very quickly got an understanding of how everything works together.


>And code gets repeated a lot more. If there is a change or update in a microservice, everyone will need to figure out which services depend on it and how they will have to adapt. With a monolith you can just use your IDE to see what will break if you make a change. So much repeated business logic. Creating a new feature involves many meetings to figure out which services have to be updated, and in what way.

This is what people mean when they say "distributed monolith" vs. microservices.


I work on a monolith with a team experimenting in microservices and good lord do I hate it. The microservice represents a required step in our user flow, and due to the way we're set up I have to spin up my own private copy. Very often there have been configuration or API changes that were not communicated to me, and so for the past few months that service has been broken; I've managed to avoid it for the most part. When I can't, I find it is faster to simply re-assign existing database records or bullshit them in a database editor rather than deal with the "why isn't the XXXXXXXXXXXX service working for me again?" flavor of the day.

And holy fuck is debugging that stuff difficult. HUUUUUGE waste of time, but management looooooooooves their blasted microservices...


Programming complexity is traded for devops complexity.

With microservices, without good documentation of how everything connects, you're going to be left with a very bad impression.


Having to have that documentation, finding, reading, understanding and trusting it already adds so much overhead.

It is still nowhere close to the ability to jump around with an IDE.

It might be in a different language with different design patterns, and to get to the details you have to check out that project anyway, because you can't document absolutely everything outside the code base. And if you do, you will end up with multiple sources of truth.

It is so much more likely that, for every little issue you otherwise might be able to answer yourself very easily, you will have to contact the team owning that microservice.

It is not only mentally exhausting. It is time consuming, it requires so much back and forth. It creates so much dependence on other people because figuring out how things are related is so much more difficult.

Sometimes I have 8 or more different IDE windows open to understand what is going on.


> you have to think about service boundaries and granularity correctly.

This is the hardest part. I'd argue that this is almost impossible to do correctly without significant domain modeling experience. Also, microservices by nature make it hard to refactor these boundaries (compared to monoliths, where you'd get compile-time feedback).

I prefer to make a structured monolith first (basically multiple services with isolated data that are squished together into a single deployable) and pull them out only if I really need to... Also helps with keeping ms sprawl under control


If you already can't serve your requests from one DB, and you already want to factor out the analytics stuff, the long running background queries, modularize the spaghetti, scale the maintenance load, CI build + testing time, etc...

That's what SOA and microservices are supposed to solve.

At that scale you do reporting from a purpose-built service.

Allegedly.


We do a lot of reporting that way. Then the users are unhappy that the data is slightly "stale". It serves some purposes, but not all purposes.


That wouldn't be a microservice then.

There's going to be a relationship between data in your services, but it shouldn't be directly referential.


> enforce separation of concerns and data ownership

You can enforce separation of concerns and data ownership in a monolith just as much as you can not enforce these two characteristics in a micro service architecture. Microservices and monoliths are a discussion about deployment artifacts, full stop.


> you create unmaintainable spaghetti that can't ever change without breaking things you can't easily surface.

How does creating a tangle of microservices (effectively distributed objects) really solve the problem?


Microservices provide an abstraction. That is kind of the point. If you feel like the data your service operates on would be better off stored in a redis database instead of an RDBMS, you can rewrite your persistence layer, test and roll out the new version of the service. As long as your APIs do not change, nobody cares how you produce responses to requests. In a monolith, this would be a nightmare. You don't have a single persistence layer to change; you have to go through every module, find all the places where this specific table or tables are being accessed, and change retrieval and storage functionality everywhere.


> Microservices provide an abstraction.

So do modules/classes/interfaces etc. You don't need a layer of HTTP in between components to have abstraction.

In addition, it feels like microservices solve a problem that very few people really have. I've never run into a case where I thought "boy, I'd sure like to have a different database for this one chunk of code". If that did happen, then sure, split it out, but I can hardly believe that splitting your entire code base into microservices has a net benefit. The real problem in nearly every project I've worked on is complexity of business logic. A monolith is much easier to refactor, and you can change the entire architecture if you need to without having to coordinate releases of many different applications.


Seems to me you are talking about a database access layer instead of microservices.

My understanding of microservices is a bunch of loosely connected services that can be changed with minimal impact to the others

The problem with the ideal is that in reality this never works; as complexity grows, the spaghetti code moves to spaghetti infrastructure. (Done a network map of a large k8s / istio deployment lately?)


The impact would be minimal only if the API of the microservice didn't change. But the same is true in a single codebase: if you have a module whose API doesn't change, the changes from refactoring it would likewise be minimal.


Constructed well, it's ravioli and not spaghetti.


But that's true of a well-constructed monolith, too. And it has far fewer failure modes and less complexity in general.

>Microservices enforce separation of concerns

Depending on where you work, it can be a problem, because the separation is not always appropriate, and can for political reasons be much harder to revert when visible at the service level (for example because the architect doesn't understand consistency, or because your manager tells you that the distributed architecture documentation has been sent to the client so it cannot be modified).

In case of undue separation, reworking the internals of the enclosing monolith has less chance of causing friction.


In practice, splitting your code into consumable libraries/modules works equally as well.

Then your monolith is just all the modules glued together.


Splitting code into libraries works better; it's simpler and faster. The only thing microservices bring to the table is being able to deploy updates independently (although this is also possible with libraries). If you don't need to deploy independently then microservices are useless complexity; if you can't deploy independently then you've got a distributed monolith.


A co-worker had a smart solution for this: your service's representation in a reporting system (a data warehouse for example) is part of its API. Your team should document it, and should ensure that when that representation changes information about the changes is available to the people who need to know it.

This really makes sense to me. I love the idea that part of a microservice team's responsibility is ensuring that a sensible subset of the data is copied over to the reporting systems in such a way that it can be used for analysis without risk of other teams writing queries that depend on undocumented internal details.
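
One hedged sketch of what that contract could look like (version string, types, and field names all invented):

    from dataclasses import dataclass, asdict

    # versioned like any public API; bumping it signals reporting consumers
    REPORTING_SCHEMA_VERSION = "2.1"

    @dataclass(frozen=True)
    class OrderReportRow:
        """Documented reporting representation of an order: part of the API."""
        order_id: int
        status: str    # one of: "open", "shipped", "cancelled"
        total: float   # major currency units

    def to_report_row(internal):
        # maps private column names to the stable, documented representation
        return OrderReportRow(
            order_id=internal["id"],
            status=internal["state"],
            total=internal["total_cents"] / 100.0,
        )

    row = to_report_row({"id": 7, "state": "shipped", "total_cents": 1250})
    print(REPORTING_SCHEMA_VERSION, asdict(row))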


your service's representation in a reporting system

At what point in time?


At the beginning: https://docs.pact.io/


> But you do not simply go and poke your reporting fingers into individual service databases. In a good microservice architecture you should not even be able to do that.

I agree. In a monolith architecture, though, you CAN do that (and many shops do.) That's where the pain comes from when they migrate from monolith to microservice: development is easier, but reports are way, way harder.


> when they migrate from monolith to microservice: development is easier [...]

Not even that -- that idea is still highly debatable.

I would argue that it absolutely isn't easier, and the stepping-back-in-time of developer experience is one of the biggest problems with microservices.

Microservices in general, are way, way harder.


What you are describing certainly isn't unique to microservices.


> you do not simply go and poke your reporting fingers into individual service databases

Side point: This is a needlessly hostile and unprofessional way to refer to a colleague. Remember that you and the reporting/analytics people at your company are working towards the same goals (the company's business goals). You are collaborators, not combatants.

You can express your same point by saying something like "The habit of directly accessing database resources and building out reporting code on this is likely to lead to some very serious problems when the schemas change. This is tantamount to relying upon a private API." etc.

We can all achieve much more when we endeavor to treat one another with respect and assume good intentions.


This is an incredible overreaction to an entirely innocuous comment.


I've noticed reporting/analytics people going extinct around my workplace as micro services make monitoring easier. There might be some pent up hostility towards the technology side


If you think telling colleagues not to "simply go and poke your reporting fingers into" things won't insult them or put them on a defensive footing, I encourage you to try it and closely note the reception you receive. In my experience, people do not appreciate being spoken to like that.


They didn't tell their colleagues to do that, they made a slightly humorous comment on a hacker news thread.


We’re colleagues by virtue of the fact that we’re members of the same profession.

Anyway, what’s the reason not to treat people on hackernews with the same respect you’d treat a coworker with?


I think I prefer the poking around analogy. I can immediately visualize why it's bad, and it doesn't have the word "tantamount".


I'm going to disagree heavily here. The world of cloud computing, microservices, and hosted/managed services has made the analyst's and data engineer's jobs easier than ever. If the software team builds a new DynamoDB table, they simply give the analytics team's AWS account the appropriate IAM permissions, and the analytics team will set up an off-peak bulk extract. A single analyst can easily run an entire data warehouse and analytics pipeline basically part time, without a single server, using hosted services and microservices. With a team of analysts, the load sharing should be such that the ETL infrastructure is only touched when adding new pipelines or a new feature transformation.

And for data scientists working on production models used within production software, most inference is packaged as containers in something like ECS or Fargate which are then scaled up and down automatically. Eg, they are basically running a microservice for the software teams to consume.

Real-time reporting, in my opinion, is not the domain of analysts; it's the domain of the software team. For one, it's rarely useful outside of something like a NOC (or similar control-room areas) and should be considered a software feature of that control room. If real-time has to be on the analysts (been there), then the software team should dual-publish their transactions to Kinesis Firehose and the analytics team can take it from there.
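
A sketch of that dual publish with boto3 (stream name and record shape invented; assumes AWS credentials are configured):

    import json
    import boto3

    firehose = boto3.client("firehose")

    def record_transaction(txn, db):
        # write to the operational store first (db is a hypothetical DAO)...
        db.save(txn)
        # ...then dual-publish the same transaction for the analytics side
        firehose.put_record(
            DeliveryStreamName="transactions-analytics",  # invented name
            Record={"Data": (json.dumps(txn) + "\n").encode("utf-8")},
        )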

Of course, all of this relies heavily on buy-in to the world of cloud computing. Come on in, we all float down here.


Cloud computing helps here, but microservices still make this harder. Some of the data is in Dynamo, some of it is in Aurora, some of it is in MySQL RDS, some of it is in S3, and nobody knows where all of it is at once.


From a project management perspective, each data source should have some requirements behind it from the business team. Those requirements should be prioritized meaning you can prioritize which data source to tackle first. You automate the process in AWS data pipelines for that data source, write the documentation for the next analyst, and move on to the next data source.

The complexity you and the OP seem to be describing is more in the management and prioritization of analytics projects than in the actual "this is a hard technical problem" domain. It's just that a lot of it is tedious, especially compared to the "everyone just put all your data in the Oracle RACs and bug the DBA until they give you permission" model of the past.


Also, one of the service teams might need to change their schema, and the reporting team needs to adjust their process to handle that. That's fine, but they need to know about it in advance. Then they might have a backlog of other things that they need to do, and then some other team's schema changes without notice, so now they always have to play catch-up.


What/where do you run this mythical one-analyst pipeline, though? Is that in cloud services too? Airflow? Kubeflow? Apache Beam? It sounds like you're just pushing the problem around.


AWS data pipelines and AWS lambda. It's cloud services the whole way down.

https://aws.amazon.com/datapipeline/


I saved a company 20k a month by creating a job server in AWS. Lambda isn't cheap when you start using it hard.


Lambda is mostly used for its trigger functionality for data or artifacts that are created at irregular intervals. E.g., an object is uploaded to S3, which triggers a lambda, which runs a copy command for that object into Redshift. The kind of stuff that's well below the threshold for leaving the free tier.
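
A hedged sketch of that trigger pattern (bucket, table, role, and connection details invented; assumes psycopg2 is bundled with the function):

    import psycopg2

    def handler(event, context):
        # fired on s3:ObjectCreated; COPY the new object into Redshift
        s3 = event["Records"][0]["s3"]
        bucket, key = s3["bucket"]["name"], s3["object"]["key"]
        conn = psycopg2.connect(host="warehouse.example.internal",  # invented
                                dbname="warehouse", user="loader",
                                password="...", port=5439)
        with conn, conn.cursor() as cur:
            cur.execute(f"""
                COPY staging.events
                FROM 's3://{bucket}/{key}'
                IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-loader'
                FORMAT AS JSON 'auto';
            """)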


It's a little sad because originally, people thought there would be a shared data base (now one word) for the whole organization. Data administrators would write rules for the data as a whole and keep applications in line so that they operated on that data base appropriately. A lot of DBMS features are meant to support this concept of shared use by diverse applications.

What ended up happening is each application uses its own database, nobody offered applications that could be configured to an existing data base, and all of our data is in silos.


Do you know why the shared database vision didn't work out? Because I still think it would be the best approach for many companies. Most companies are small enough that they could spend less than $10k/month for an extremely powerful cloud DB. Then you could replace most microservices with views or stored procs. What could be simpler?

I think one reason to avoid this approach is because SQL and other DB languages are pretty terrible (compared to popular languages like C#, Python, etc...) But why has no one written a great DB language yet?


I've worked on a service like this. 800k lines of PL/SQL and Java stored procedures (running inside the database, so you could call Java from PL/SQL and vice versa), powered by triggers.

* Testing is god-awful. To test a simple thing you had to know how the whole application worked, because there's validation in triggers, which triggers other triggers, which require things to be in a certain state. This made refactoring really hard/risky so it rarely got done.

* There's a performance ceiling, and when you hit that, you're done. We did hit a ceiling, did months of performance tuning, then upgraded to the biggest available box at the time, 96 cores, 2TB ram, which helped, but next time the upgrade won't be big enough. You're limited in what one box can do (and due to stored procedures being tied to the transaction there are limits to what you can do concurrently as well)


I think they go too crazy with the stored procedures. I would rather see constraints and views to make the data work. Triggers are useful for simple behind-the-scenes things like logging, or creating updateable views, but their behavior should be kept simple. If they're sending e-mails and launching missiles, it's probably too much.

Debugging PL/SQL without the PL/SQL debugger is nearly impossible. Unfortunately a lot of shops cheap out on developer tools after they buy the server licenses. I never liked the idea of Java on the database. The good thing about PL/SQL is that nobody wants to write it so it has a tendency to not be overused by most developers.

The performance ceiling probably wouldn't be too low if there weren't too much extra activity with each update. As with all databases, sharding and replication are your friend.


DBAs were a bottleneck. In the best case, you’d throw your application’s data needs over the wall to the DBAs (remember, this was the waterfall era) and hope they’d update the schema for you in a timely manner. In the worst case, the DBAs were petty tyrants who stifled progress. In the worser case, they were incompetent and you ended up with all these applications directly reading and writing the database, making schema changes impossible and coupling applications in obtuse, incredibly difficult to fix ways.

In any case, designing a single schema that encompassed all the needs of the organization and could grow and change as the organization did was nearly always too much to ask.

This was in the days of enterprise data modeling where people believed there really was just one data model or object model that could represent the whole org, independent of the needs of any given application. I don’t think anybody believes that any more.


Probably for the same reasons that waterfall development doesn't work out. This approach requires up-front specification of the data before the application domain is well understood. Any application desiring a migration to a different schema would need to work with the others to do it. Finally, developers bristle at the idea that they have to wait for another team to get their job done. In the absence of a strong company policy, it will inevitably drift toward shipping an application rather than keeping everything together.

Perhaps refactoring, had it been better understood around 1970, could have gone a long way toward harmonizing diverse schemas, allowing experimentation with eventual refactoring into the common database.

Our current environment makes this impossible. There's no way that Salesforce is going to ship a version that works with your company's database schema. You're going to have to supply that replication yourself. Same for Quickbooks. To get that kind of customization you need to be spending hundreds of thousands for enterprise software.


I didn't make this clear in my original comment, but I was envisioning that an app like Salesforce would use its own schema, but still live in the same DB with other schemas. I'm going to assume that Salesforce has some notion of an Employee. Salesforce would use its own Employee table by default, but it would provide extension points (views and SPs) that would allow you to read and write Employee data from tables in another schema if you want. This might be preferable to duplicating Employee data.

Edit - I just saw that you addressed this in your original post: "nobody offered applications that could be configured to an existing data base"


> I think one reason to avoid this approach is because SQL and other DB languages are pretty terrible (compared to popular languages like C#, Python, etc...) But why has no one written a great DB language yet?

SQL is actually hard to compare to programming languages, because in SQL you say what you want, while in an imperative language you say how. I only know of one language that competed with SQL (and lost): QUEL (originally used by Ingres and Postgres).

BTW, for triggers and stored procedures you can actually use a traditional language. I know that PostgreSQL supports Python; you just need to load the proper extension to enable it.
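
For example, a minimal PL/Python sketch (assumes the plpython3u extension is installable and you connect via psycopg2; names invented):

    import psycopg2

    conn = psycopg2.connect("dbname=appdb")  # invented connection string
    with conn, conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS plpython3u;")
        cur.execute("""
            CREATE OR REPLACE FUNCTION normalize_email(raw text)
            RETURNS text AS $$
                # plain Python, running inside PostgreSQL
                return raw.strip().lower()
            $$ LANGUAGE plpython3u;
        """)
        cur.execute("SELECT normalize_email('  Alice@Example.COM ');")
        print(cur.fetchone()[0])  # alice@example.com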


For workloads which are read-heavy, I think a single database can be a great solution -- a small monolith has exclusive write access to the db, and any number of polyglot read-only services are connected to as many read-replicas as are needed for horizontal scaling.

For write-heavy workloads, best of luck to you :)


> I think one reason to avoid this approach is because SQL and other DB languages are pretty terrible

Where did you get that from?


SQL is actually amazing at some things that are very difficult, verbose, or slow to do in C# or Python.


I disagree with the conclusion. While every situation is unique, the default should be separate persistence layers for analytics and transactions.

Analytics has very different workloads and use cases than production transactions. Data is WORM, latency and uptime SLAs are looser, throughput and durability SLAs are tighter, access is columnar, consistency requirements are different, demand is lumpy, and security policies are different. Running analytics against the same database used for customer facing transactions just doesn't make sense. Do you really want to spike your client response times every time BI runs their daily report?

The biggest downside to keeping analytics data separate from transactions is the need to duplicate the data. But storage costs are dirt cheap. Without forethought you can also run into thorny questions when the sources diverge. But as long as you plan a clear policy about the canonical source of truth, this won't become an issue.

With that architecture, analysts don't have to feel constrained about decisions that engineering is making without their input. They're free to store their version of the data in whatever way best suits their work flow. The only time they need to interface with engineering is to ingest the data either from a delta stream in the transaction layer and/or duplexing the incoming data upstream. Keeping interfaces small is a core principle of best engineering practices.


In my last job I was a DevOps guy on a Data Eng. team and we used microservices (actually serverless) extensively, to the point that none of our ETL relied on servers (it was all serverless; AWS Lambda).

Now, databases themselves are a different story; they are the persistence/data layer that the microservices use. But it's actually doable, and I'd even say much easier, to use microservices/serverless for ETL, because it's easier to develop CI/CD and testing/deployment with non-stateful services. Of course, it does take a certain level of engineering maturity and skillsets, but I think the end results justify it.


Is there a book or course to buy toward getting an understanding of the more philosophical concepts underneath your approach?


This isn’t a new problem to microservices though, although maybe it’s amplified. Reporting was challenging before microservices became popular too with data from different sources. Different products, offline data sources etc that all had to be put together. The whole ETL, data warehousing stuff.

In the end everything involves tradeoffs. If you need to partition your data to scale, or for some other reason need to break up the data, then reporting potentially becomes a secondary concern. In this case maybe delayed reporting or a more complex reporting workflow is worth the trade off.


+1, Informatica was founded in 1993. Teradata in 1979. These are not new problems.

Data warehousing has drastically improved recently with the separation of storage & compute. A single analyst's query impacting the entire data warehouse is a problem that will, in the next few years, be a thing of the past.


Microservices are primarily about siloing different engineering teams from each other. If you have a singular reporting database that a singular engineering team manages, I'm not sure it's a big deal. Reporting might be a "monolith" but the system as a whole isn't. Teams can still deploy their services and change their database schemas without stepping on each other's toes.


> Teams can still deploy their services and change their database schemas

No, because as soon as you change your schema, you have to plan ahead with the reporting team for starters. The reports still have to be able to work the same way, which means the data needs to be in the same format, or else the reports need to be rewritten to handle the changes in tables/structures.


That's no different than changing your API specification or refactoring your code or anything else. Ideally the entrypoints into your team's services should be considered hard contracts to the outside world and the rest of the organization. Changing the contract out from under your customer is not something that should ever be easy.

IMO the devops folks should define some standard containers that include facilities for reporting on low-level metrics. Most of the monitoring above that should be managed by the microservice owner. The messages that are consumed for BI and external reporting should not have breaking changes any more than the APIs you provide your clients should.


"That's no different than changing your API specification"

This is a great point. The way to make backward-compatible changes to an API is by adding additional (JSON) keys, not changing / removing keys. The same approach works for a DB -- adding a new column doesn't break existing reporting queries.
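
A tiny demonstration (sqlite3, invented table): the additive change leaves the old query untouched.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'ada')")

    legacy_report = "SELECT id, name FROM users"  # written by the BI team

    # additive, backward-compatible schema change
    conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

    # the old report still runs unmodified
    print(conn.execute(legacy_report).fetchall())  # [(1, 'ada')]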


Isn't that a weakness of any data dependency? I could push a new service that deprecates a field and supplies a replacement. I still have to communicate it and get downstream consumers to migrate. Or is the problem that reporting teams are looking at the low-level implementation details, rather than some stable higher-level interface? (I don't know how to avoid that when you're just pulling someone else's database directly or indirectly.)


No one has solved that problem, and it sucks: what ends up happening is you end up porting that data from those disparate SQL and NoSQL databases either to a warehouse, which is an RDBMS, or into a data lake. That, again, is only possible if you somehow manage to find all the drivers. You're doubly screwed if you have a hybrid cloud and on-prem setup.


>> No one has solved that problem (...) you end up again porting that data from those disparate SQL and NoSQL databases either to a warehouse

That's exactly how that problem has been solved successfully for the past 20 years.


The high latency between operational data and that data being reflected in reporting using traditional data warehouse pipelines makes it difficult for companies to make effective business decisions in a fast-paced business environment. Even in competent execution, that latency is frequently measured in weeks for big businesses. In the last few years, I've been approached by a number of traditional big businesses looking to rearchitect their database systems to make them more monolithic for the express purpose of reducing end-to-end latency in support of operational decisions.

It is extremely expensive and slow to move all the data to a data warehouse. Ignoring the cost element, the latency from when data shows up in the operational environment to when it is reflected in the output of the data warehouse is often unacceptably high for many business use cases. A large percentage of that latency is attributable solely to what is essentially complex data motion between systems.


It is not necessarily the case that data warehouses have high update latency. Open source streaming tech (e.g. Apache Kafka, Beam, etcetera) can be used to build an OLAP database updated in near real-time.


Sure, I've done this many times myself. The big caveat is that this only works if your data/update velocity is relatively low. I've seen many operational data models where the data warehouse would never be able to keep up with the operational data flow. Due to the rapidly increasing data intensity of business, there is a clear trend where this latter situation will eventually become the norm. I've already seen instances at multiple companies where Kafka-based data warehouse pipelines are being replaced with monolith architectures because the velocity of the operational system is too high.

For it to work, online OLAP write throughput needs to scale like operational databases. This is not the case in practice, so operational databases can scale to a point where the OLAP system can't keep up. The technical solution is to scale the operational system to absorb the extra workload created by the data warehousing applications, but current database architectures are not really designed for it so it isn't trivial to do.


Our data is near real time. But I can see how latency can be an issue depending on the size of the data and the transformations needed before it hits the DW. But there are solutions, Kafka being one.


Well, it's debatable if that is "solving" the problem or just hiding or mitigating it.


This phrase was new to me: https://en.wikipedia.org/wiki/Data_lake


This is what Kafka is for. You put Kafka on top of your database to expose data and events. Now BI can take the events and put them into their system as they want.


> You put Kafka on top of your database to expose data and events. Now BI can take the events and put them into their system as they want.

Right, that's the data warehouse method that I described. "Put them into their system" is a lot more work than just typing "put Kafka on top of your database."


Things like Maxwell[0] help a lot with getting stuff into Kafka from the DB, but I agree with your point entirely. Kafka is not something I’d recommend unless you’ve got a team dedicated to maintaining it. Thankfully Confluent is rising up to fill that need and is reasonably priced. Confluent also has an offering for streaming binlogs into Kafka too.

[0] http://maxwells-daemon.io/


Worth pointing out that running Kafka is itself no small thing. So now you've added BI and also Kafka, which itself requires a bundle of services to operate. And you have to keep BI and Kafka in sync with all your various other data stores' schemas, and with each other.

Which all gets back to the point of the OP.


Shameless plug: feeding data from operational databases to DWH is one main use case of change data capture as e.g. implemented by Debezium (https://debezium.io/) for MySQL, Postgres, MongoDB, and more, using Apache Kafka as the messaging layer.
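
Roughly, wiring up a Debezium Postgres connector is one POST to the Kafka Connect REST API (host names and credentials invented; exact config keys vary by Debezium version, so check the docs):

    import requests

    connector = {
        "name": "orders-connector",  # invented
        "config": {
            "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
            "database.hostname": "orders-db.internal",  # invented
            "database.port": "5432",
            "database.user": "debezium",
            "database.password": "...",
            "database.dbname": "orders",
            "topic.prefix": "orders",  # older versions use database.server.name
        },
    }

    resp = requests.post("http://connect.internal:8083/connectors",  # invented
                         json=connector, timeout=10)
    resp.raise_for_status()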

Disclaimer: working on Debezium


Data Warehouses are (usually) not optimized for individual inserts, updates, deletes (DMLs). Loading data into DWHs is usually done through copy/ingestion commands where aggregated/batched data is copied from blob storage (s3, azure, etc...) to a staging table in the DWH.

In a non-append-only scenario, Debezium tracks each source operation (insert, update, delete) from the replication log (oplog, binlog, etc) as an individual operation that's emitted into a kafka topic. How does one replicate this to a Data Warehouse in an efficient manner?

I have not been able to use Debezium as way to replicate to a Data Warehouse for this very reason. At least not without having to resort to very complicated data warehouse merge strategies.

Note, there exist Data Warehouses that allow tables to be created in either OLAP or OLTP flavor. I understand that Debezium could easily replicate to an OLTP staging table. But are there any solutions if this isn't an option?


Curious how support is for commercial databases like SQL Server and Oracle? In the larger, older, enterprises these databases are rampant...


> You put Kafka on top of your database

Your casual tone is at odds with what I've seen when teams run Kafka clusters in production. Not a decision I would take so lightly.


Without tone of voice over text, unsure if the suggestion is in sincerity or sarcasm.

The article talks about microservices being split up due to fad as opposed to deliberate, researched reasons. Putting Kafka over the database also makes the data distributed when, in most cases, that's not necessary!


This is a solved problem: "Data Engineering" teams solve it by building a data pipeline. It's not for all orgs, but for a large org, this is worth doing right.


"data pipeline" is just the new trendy phrase for ETL, which the GP mentioned. just because a solution exists, does not make it a solved problem. it's not the right solution for everyone


No, it's not.

ETL is really database-focused and batch-focused: Extract, Transform, Load.

A data pipeline is a combination of streams and batch. For example, you can implement change data capture using something like https://debezium.io/

Here's how Netflix solves it https://netflixtechblog.com/dblog-a-generic-change-data-capt...

Overview

"Change-Data-Capture (CDC) allows capturing committed changes from a database in real-time and propagating those changes to downstream consumers [1][2]. CDC is becoming increasingly popular for use cases that require keeping multiple heterogeneous datastores in sync (like MySQL and ElasticSearch) and addresses challenges that exist with traditional techniques like dual-writes and distributed transactions [3][4]."


What you're saying applies more to ELT, which is an efficient method when using batches. There's no reason why you can't stream ETL.


> It's not for all orgs

Then it doesn't seem to be solved. Seems like teams operating at a lean scale would have an issue with this, especially teams with lopsided business:engineering ratios


The remark wasn't that there weren't solutions, but that it is still a problem that needs a solution. Storing all data within a single database is a much simpler way to get started if you want to run analytic queries. You spin up a read slave that is tuned for expensive queries, rather than OLTP ones.


I don't think you're right back to a monolith with centralized reporting. Remember, microservices doesn't mean JSON-RPC over HTTP. Passing updates extracted via change data capture and forwarding them to another reporting system is a perfectly viable interface. Data duplication is also an acceptable consequence in this design.


> Passing updates extracted via change data capture and forwarding them to another reporting system is a perfectly viable interface.

Right, that's the data warehouse method I described, keeping a central database in a reporting system. But now you just have to keep that database schema stable, because folks are going to write reports against it. It's a monolith for reporting, and its schema is going to be affected by changes in upstream systems. It's not like source systems can just totally refactor tables without considering how the data downstream is going to be affected. When Ms. CEO's report breaks, bad things happen.


Sorry, I noticed you made the same observation in another thread after you got dogpiled by everyone; I left my question over there. Yeah, I've had to contend with that problem before -- hard to imagine how I forgot about it :)


You just subscribe a reporting service to certain topics, done.


https://prestosql.io/

It can access all those different databases.

You can also make your own connectors that make your services appear as tables, which you can query with SQL in the normal way.

So if the new accounts micro-service doesn't have a database, or the team won't let your analysts access the database behind it, you can always go in through the front-door e.g. the rest/graphql/grpc/thrift/buzzword api it exposes, and treat it as just another table!
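
Loosely, a federated query then looks like this (catalog, schema, and table names invented; shown via the presto-python-client, assuming it's installed):

    import prestodb

    conn = prestodb.dbapi.connect(
        host="presto.internal", port=8080, user="analyst",  # invented
        catalog="hive", schema="default",
    )
    cur = conn.cursor()
    # one SQL statement joining two different backends
    cur.execute("""
        SELECT u.email, count(*) AS orders
        FROM postgresql.accounts.users u
        JOIN hive.warehouse.orders o ON o.user_id = u.id
        GROUP BY u.email
    """)
    for row in cur.fetchall():
        print(row)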

Presto is great even for monoliths ;) Rah rah presto.


It's been about 4 years since I've been in this world, but I remember there being several products all doing a very similar thing: Presto, Hive, SparkSQL, Impala, perhaps some more I'm forgetting. Is the situation still the same? Or has Presto "won out" in any sense?


Hive and Impala are databases.

Presto and SparkSQL are SQL interfaces to many different datasources, including Hive and Impala, but also any SQL database such as Postgres, and many other types of datastores, such as Cassandra and Redis; the SQL tools can query all these different types of databases with a unified SQL interface, and even do joins across them.

The difference between Presto and SparkSQL is that Presto is run on a multi-tenant cluster with automatic resource allocation. SparkSQL jobs tend to have to be allocated a specific resource allocation ahead of time. This makes Presto (in my experience) a little more user-friendly. On the other hand, SparkSQL has better support for writing data to different datasources, whereas Presto pretty much only supports collecting results from a client or writing data into Hive.


I think some of this might be misinformed.

I know Hive can definitely query other datasources like traditional SQL databases, redis, cassandra, hbase, elasticsearch, etc, etc. I thought Impala had some bit of support for this as well, though I'm less familiar with it.

And SparkSQL can be run on a multi-tenant cluster with automatic resource allocation - Mesos, YARN, or Kubernetes.


Do issues arise when trying to match types across different persistence layers?


In practice, no, haven't hit any.


Presto can't deal with Elasticsearch. It can query basic data, but it's not optimized to translate SQL to a native ES query (the SQL ES query feature is for paying customers).


Where I work we use micro-services through Lambda (we have dozens of them) and use DynamoDB for our tables. DynamoDB streams are piped through Elasticsearch. We use it for our business intelligence. Took us about a week to set up proper replication and sharding. I don't have a strong opinion on monolith vs. micro-service: pick one or the other, understand its pitfalls, and write high-quality (aka simple and maintainable) code.


Is there a way to do this for a relational database at scale?


We used to do that, but DynamoDB being very dynamic (which we like) makes it harder to use a declarative schema. ES auto-detects the schema, which makes it a breeze. I'd say try with JSON fields, but I don't think you'll get great query speed.


You can do change data capture with Debezium.

Recently the Netflix engineering blog mentioned a tool called DBLog too, but I don’t believe they’ve released it yet.


But I’d argue Monoliths don’t have anything inherent to them which makes reporting easier. A proper BI setup requires a lot of hard work no matter how the backend services are built.


> But I’d argue Monoliths don’t have anything inherent to them which makes reporting easier.

It's easier to join tables in databases that live on a single server, in a single database platform, than it is to connect to lots of different data sources that live in different servers, possibly even in different locations (like cloud vs on-premises, and hybrids.)


> It's easier to join tables in databases that live on a single server, in a single database platform

1) what if your data set is larger than you can practically handle in a single DB instance?

2) nothing about a monolith implies you have a single data platform, let alone a single DB instance


> 1) what if your data set is larger than you can practically handle in a single DB instance?

I have clients at 10TB-100TB in a single SQL Server. It ain't a fun place to be, but it's doable.


Monoliths have an advantage up until the point where you have to shard the database in some way.


Whether or not your system intentionally ended up with this architecture, GraphQL provides a unified way of fetching the data. You'll still have to implement the details of how to fetch from the various services, but it gives people who just want read access to everything a clean abstraction for doing so.
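
A minimal sketch with the graphene library (service URLs and field names invented): one GraphQL schema stitches reads across several services.

    import graphene
    import requests

    class User(graphene.ObjectType):
        id = graphene.ID()
        email = graphene.String()
        order_count = graphene.Int()

    class Query(graphene.ObjectType):
        user = graphene.Field(User, id=graphene.ID(required=True))

        def resolve_user(root, info, id):
            # each field can come from a different microservice (URLs invented)
            profile = requests.get(f"http://users.internal/api/users/{id}").json()
            orders = requests.get(f"http://orders.internal/api/orders?user={id}").json()
            return User(id=id, email=profile["email"], order_count=len(orders))

    schema = graphene.Schema(query=Query)
    result = schema.execute('{ user(id: "1") { email orderCount } }')
    print(result.data)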


REST can do the exact same thing.


With less flexibility, and more round trips.


I am not sure about that. Both RESTful APIs and GraphQL APIs need to be designed and flexibility is basicaly a measure of how good your API design is. I don't see how GraphQL is intrinsically more flexible.


> With less flexibility, and more round trips

That all depends on the API.


Sure - but if you ever want it to be as flexible as GQL, you need to implement...a query language?


It's nice to have a clean abstraction, but if the various services are on tables in different databases, things like joins get much more expensive, no? Performance is important.


Build a reporting database that maintains a copy of data from other data stores.

Simple enough. Surely you wouldn't run analytics directly on your prod serving database, and risk a bad query taking down your whole system?


There are many business environments where copying the data is effectively intractable so you need to run all of your data model operations in a single instance. More data models have this property with each passing year, largely because copying data is very slow and very expensive.

This is not a big deal if your system is designed for it, sophisticated database engines have good control mechanisms for ensuring that heavy reporting or analytical jobs minimally impact concurrent operational workload.


I'd probably reach for setting up a read replica (trivial) before completely overhauling my architecture.


That's the data warehouse option.


> Surely you wouldn't run analytics directly on your prod serving database, and risk a bad query taking down your whole system?

Uhh, yep, that's exactly how a lot of businesses work.

There are defenses at the database layer. For example, in Microsoft SQL Server, we've got Resource Governor which lets you cap how much CPU/memory/etc that any one department or user can get.
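
Roughly, driven from Python via pyodbc (pool and group names invented; syntax per the SQL Server docs, worth double-checking for your version):

    import pyodbc  # assumes a SQL Server ODBC driver is installed

    conn = pyodbc.connect("DSN=warehouse;UID=admin;PWD=...",  # invented DSN
                          autocommit=True)
    # cap what the analyst workload can consume
    conn.execute("CREATE RESOURCE POOL AnalystPool WITH (MAX_CPU_PERCENT = 20);")
    conn.execute("CREATE WORKLOAD GROUP AnalystGroup USING AnalystPool;")
    conn.execute("ALTER RESOURCE GOVERNOR RECONFIGURE;")
    # a classifier function (not shown) then routes analyst logins to AnalystGroup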


That's not a good idea. The usage patterns for a production database and one that runs reporting are very different. Reporting has long-running queries with complex joins; production has many parallel short queries. If you start mixing the two, you can no longer reliably tune your database. For example, you would want the alert for slow queries set to a different timeout on production than on reporting.

Also, you complicate locking down access to the database. A reporting database can typically contain less sensitive info, reporting would not have password (hashes) for user accounts for example.


I don't think Brent was necessarily saying it was a good idea, just something that is commonly seen. At my own company we try hard to get customers to run reports against replicas/extracts rather than production, but some customers insist that they absolutely _need_ to pull in live data from production. So they run complex reports against a production OLTP schema and wonder why the system is running slow...


> I don't think Brent was necessarily saying it was a good idea

Reading other comments Brent’s made here, I’m not so sure.


> Reading other comments Brent’s made here, I’m not so sure.

No no, it's not a good idea, but it's just the reality that I usually have to deal with. I wish I could lay down rules like "no analyst ever gets the rights to query the database directly," but all it takes is one analyst to be buddy-buddy with the company owner, and produce a few reports that have really high business value, and then next thing you know, that analyst has sysadmin rights to every database, and anybody who tries to be a barrier to that analyst's success is "slowing the business down."


Pretty trivial to set up read replicas using log shipping in MSSQL and PostgreSQL to offload analytical loads to secondary servers.

I work on a monolith that does this, but it's usually not even necessary; a single db server on modern hardware with proper resource governing can handle quite a bit.


> Uhh, yep, that's exactly how a lot of businesses work.

Just because a lot of businesses do it, doesn’t mean it’s a good idea. A lot of businesses don’t do source control of any kind, so should we all do that too?


Is that a bad thing? I don't think anyone is against a reporting monolith. To me, being able to silo data in the appropriate stores for their read/write patterns, and still query it all in a single columnar lake, seems like a feature, not a bug to be solved.


> now they're stumped as to how they're supposed to quickly get data out

I'd argue that (given a large enough business) "reporting" ought to be its own software unit (code, database, etc.) which is responsible for taking in data flows from other services and storing them in whatever form happens to be best for the needs of report-runners. Rather than being wandering auditor-sysadmins, they're mostly customers of another system.

When it comes to a complicated ecosystem of many idiosyncratic services, this article may be handy: "The Log: What every software engineer should know about real-time data's unifying abstraction"

[0] https://engineering.linkedin.com/distributed-systems/log-wha...


Reporting directly on databases is a rather '90s thing to do. Why would you still actively pursue such a reporting method from the OLTP/OLAP world? If you use tools that are purpose-built for reporting, an analyst using those tools will obviously not be able to utilise them against an incompatible environment.


Or you could have CQRS projectors (read models), which solve exactly this: they aggregate data from lots of different eventually consistent sources, providing you with a locally consistent view of only the events you might be interested in.

It will lag behind to some extent, roughly the processing delay plus a round-trip network delay, but it can include arbitrary things that are part of your event model.

Though it's not a silver bullet (distributed constraints are still a pain in the ass), and if the system wasn't designed as a DDD/CQRS system from the ground up, it would be hard to migrate, especially because you can't take small steps toward it.
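
The storage end of a projector can be very plain. A sketch in Postgres-flavored SQL (table and column names are made up): each handled event idempotently upserts one denormalized row, and the event sequence number makes replays and out-of-order delivery harmless.

    -- Read-model table maintained by the projector, one row per aggregate.
    CREATE TABLE customer_summary (
        customer_id    bigint PRIMARY KEY,
        total_orders   int    NOT NULL,
        last_event_seq bigint NOT NULL
    );

    -- Applying an event: idempotent upsert keyed on the aggregate id;
    -- the WHERE clause silently drops stale or replayed events.
    INSERT INTO customer_summary (customer_id, total_orders, last_event_seq)
    VALUES ($1, $2, $3)
    ON CONFLICT (customer_id) DO UPDATE
        SET total_orders   = EXCLUDED.total_orders,
            last_event_seq = EXCLUDED.last_event_seq
        WHERE customer_summary.last_event_seq < EXCLUDED.last_event_seq;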


Isn't this the whole point of a data lake?


> Isn't this the whole point of a data lake?

Yes, but data lakes don't fill themselves. Each team has to be responsible for exporting every transaction to the lake, either in real time or delayed, and then the reporting systems have to be able to combine the different sources/formats. If each microservice team expects to be able to change their formats in the data lake willy-nilly, bam, the report breaks again.


This problem exists in monolithic data stores and codebases too. It's not as if independent teams have absolute sovereignty over their table schemas.

Schemas evolve as the needs of the product change, and that evolution will always outpace the way the business looks at the data.

The best way I've seen to deal with this is to handle it at report query time (e.g., pick a platform that can effectively apply the necessary transformations at query time rather than at load time).


You can't entirely escape this problem. Even when companies want to fit everything in a single database, they often find they can't. With enough data you'll eventually run into scalability limitations.

A quick fix might be to split different customers onto different databases, which doesn't require too many changes to the app. But now you're stuck building tools to pull from different databases to generate reports, even though you have a monolithic code base.
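
If the app happens to be on PostgreSQL, one lighter-weight option for that is postgres_fdw, which lets a reporting database query the per-customer shards in place. A sketch (server, schema, and table names below are invented):

    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    -- Register one shard as a foreign server and import its tables.
    CREATE SERVER shard_eu FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'shard-eu.internal', dbname 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER shard_eu
        OPTIONS (user 'reporting', password 'secret');
    CREATE SCHEMA eu;
    IMPORT FOREIGN SCHEMA public LIMIT TO (orders)
        FROM SERVER shard_eu INTO eu;

    -- A report then UNION ALLs across shards (us set up the same way).
    SELECT 'eu' AS shard, count(*) FROM eu.orders
    UNION ALL
    SELECT 'us', count(*) FROM us.orders;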


I've seen that BigQuery has federated querying, so you can build queries over data in BigQuery, Bigtable, Cloud SQL, Cloud Storage (Avro, Parquet, ORC, JSON, and CSV formats), and Google Drive (CSV, newline-delimited JSON, Avro, or Google Sheets).
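
For example, with a Cloud SQL connection resource configured, a single query can join native BigQuery data against the live operational database (project, dataset, and connection names below are placeholders):

    SELECT o.customer_id, c.name, SUM(o.amount) AS total
    FROM `myproject.analytics.orders` AS o
    JOIN EXTERNAL_QUERY(
        'myproject.us.crm_connection',
        'SELECT id, name FROM customers'
    ) AS c ON c.id = o.customer_id
    GROUP BY o.customer_id, c.name;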


Even if you have a monolith, you’re still going to have multiple sources that you want to report on. Even in an incredibly simple monolith I could imagine you’d have: your app data, Salesforce, Google Analytics. Having an ELT > data warehouse pipeline isn’t difficult, and what reporting use case is undermined by the data being a few minutes old?


Off-topic, but I knew I had seen that name somewhere: https://dbareactions.com/post/183007001237/devops-owns-the-q...

(SRE here, but I work on databases as well all day)


> Reporting on a bunch of different databases is a hard nut to crack.

Maybe, but your business analyst already needs to connect to N other databases/data-sources anyway (marketing data, web analytics, salesforce, etc, etc), so you already need the infrastructure to connect to N data sources. N+1 isn't much worse.


This is a problem, but I'm not sure having everything in a single data store is a great idea either. Generally you want your analytics separate from your operations anyway. We do this by having a central ES instance that is fed just the data it needs, which has worked perfectly fine for our needs.


> Reporting on a bunch of different databases is a hard nut to crack.

It's not necessarily a bad idea though :-/


I thought that's what solutions like Calcite [1] were for: running queries across disparate data sources. Yes, you still need adapters for each source to normalize access semantics. No, not all sources will have the same schema. But if you're trying to combine Postgres and DynamoDB into a single query, you would narrow your view to something that exists in both places, e.g. customer keys, metadata, etc.

Maybe I'm wrong.

[1] https://calcite.apache.org/


Calcite is only a SQL parser and planner. It doesn't execute anything and is meant to be a component in a larger database system.

You need to use something like Spark, Presto or Drill if you want to run queries across different data sources.
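
In Presto/Trino, for instance, each source is mounted as a catalog and one query can join across them. A sketch (the catalog, schema, and table names are illustrative):

    -- users live in an operational Postgres, click events in a data lake
    SELECT u.email, count(*) AS clicks
    FROM postgresql.public.users AS u
    JOIN hive.weblogs.clicks AS c
        ON c.user_id = u.id
    GROUP BY u.email;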


It seems like interacting with customers and enforcing business rules is one job, and observing what's happening is a different concern. Observing means collecting a lot of logs into a reporting database.


Just recreate the database you broke apart, only as something less flexible.

