
I don't think I blame the author at all. I'm not sure why you would start with microservices, unless you wanted to show that you could build a microservices application. Monoliths are quicker and easier to set up when you're talking about a small service in the first place.

It's when an organization grows and the software grows and the monolith starts to get unwieldy that it makes sense to go to microservices. It's then that the advantage of microservices both at the engineering and organizational level really helps.

A team of three engineers orchestrating 25 microservices sounds insane to me. A team of thirty turning one monolith into 10 microservices and splitting into 10 teams of three, each responsible for maintaining one service, is the scenario you want for microservices.






We’ve done exactly this - turned a team of 15 engineers managing one giant monolith into two teams managing about 10 or so microservices (docker + kubernetes, OpenAPI + light4j framework).

Even though we are in the early stages of redesign, I’m already seeing some drawbacks and challenges that just didn’t exist before:

- Performance. Each of the services talks to the others via a well-defined JSON interface (OpenAPI/Swagger yaml definitions). This sounds good in theory, but parsing JSON and then serializing it N times has a real performance cost. In a giant “monolith” (in the Java world) EJBs talked to each other, which despite being Java-only (in practice) was relatively fast, and could work across web app containers. In hindsight, it was probably a bad decision to JSON-ize all the things (maybe another protocol?)

- Management of 10-ish repositories and build jobs. We have Jenkins for our semi-automatic CI. We also have our microservices in a hierarchy, all depending on a common parent microservice. So naturally, branching, building and testing across all these different microservices is difficult. Imagine having to roll back a commit, then having to find the equivalent commit in the two other parent services, then rolling back the horizontal services to the equivalent commit, some with different commit hooks tied to different JIRA boards. Not fun.

- Authentication/Authorization also becomes challenging since every microservice needs to be auth-aware.

As I said we are still early in this, so it is hard to say if we reduced our footprint/increased productivity in a measurable way, but at least I can identify the pitfalls at this point.


The first thing I start trying to convince people to ditch on any internal service is JSON.

There are only two times when JSON is a particularly good choice: when it's important for the messages themselves to be human-readable, or when it's important to be able to consume it from JavaScript without using a library. Any other time, something like protocol buffers is going to give you lower latency, lower bandwidth requirements, lower CPU costs, lower development effort, less need for maintaining documentation, and better standardization.

If you ditch the HTTP stuff while you're at it, you can also handily circumvent all the ambiguities and inter-developer holy wars that are all but inherent to the process of taking your service's semantics, whatever they are, and trying to shoehorn them into a 30-year-old protocol that was really only meant to be used for transferring hypermedia documents. Instead you get to design your own protocol that meets your own needs. Which, if you're already building on top of something like protobuf, will probably end up being a much simpler and easier-to-use protocol than HTTP.


Not to mention, JSON APIs start out simple because you just call `to_json` on whatever object you need to share and then move on. Except nobody’s ever documented the format, and handling ends up taking place in a million different places because you can just pass the relevant bits around. The API is “whatever it did before we split it up”.

Now when someone goes to replace one side, it’s often impossible to even figure out a full definition of the structure of the data, much less the semantics. You watch a handful of data come across the pipe, build a replacement that handles everything you’ve seen, and then spend the next few months playing whack-a-mole fixing bugs when data doesn’t conform to your structural or semantic expectations.

JSON lets you get away with never really specifying your APIs, and your APIs often devolve to garbage by default. Then it becomes wildly difficult to ever replace any of the parts. JSON for internal service APIs is unmitigated evil.


You could just make a shared object module that is versioned and have all the microservices use those objects to communicate. Or you could implement avro schemas. Many ways around this issue.

I don't know if I would recommend NOT using json when starting out (I'm on the side of pushing for a monolith for as long as possible), but yes omg I've been there.

I've migrated multiple services out of our monolith into our micro service architecture and oh boy, it is just impossible to know (or find someone who knows) what structure is passed around, or what key is actually being used or not. Good luck logging everything and pulling your hair documenting everything from the bottom up.


> Except nobody’s ever documented the format,

That's hardly a JSON problem. You still experience that problem if you adopt any undocumented document format or schema.


I think the point the OP was making was more about the tooling than the format. Having a library that forces you to define the format you're using, instead of dumping to_json whatever internal representation you had, also doubles as documentation.

But then, of course, that can be considered boilerplate code (and, in the beginning and most of the time, it actually is just a duplication of your internal object structure).


That, and having the format definition file support comments means that you have a convenient all-in-one place where, at least in the more straightforward cases, you can handily describe both the message format and the service's behavior in a single file that's easy to share among all your teams.

Precisely.

The easiest path with JSON is to do none of this, and so the majority of teams (particularly inexperienced ones) do none of it. With protos, someone must at least sit down and authoritatively outline the structure of any data being passed around, so at a minimum you’ll have that.

But even just forcing developers to do this generally means they start thinking about the API, and have a much higher chance of documenting the semantics and cleaning up parts that might otherwise have been unclear, confusing, or overly complex.
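To make that concrete, even a tiny schema like this (made-up names, proto3 syntax) gives every team one commentable, authoritative description of what actually goes over the wire:

    syntax = "proto3";

    package orders;

    // One line item as exchanged between the order and billing services.
    message OrderLine {
      string sku = 1;              // catalog identifier
      uint32 quantity = 2;         // always >= 1
      int64 unit_price_cents = 3;  // price in minor units to avoid floats
    }

    message Order {
      string order_id = 1;
      repeated OrderLine lines = 2;
      reserved 3;                  // was coupon_code; removed, do not reuse
    }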


I’m surprised that bencoded dictionaries haven’t ever become popular outside of bittorrent. They are expressive, fast, and easy to implement.

Man if you think protobuf is fast, just wait until you try C pod structs with type-length headers.

They are literally infinitely faster to encode/decode than protobuf.

They even have the same obnoxious append-only extensibility of protobuf if that’s what really gets your jimmies firing.


Use FlatBuffers, and then endianness concerns are gone.

Are FlatBuffers essentially Capnproto without the RPC?

FlatBuffers can be used with gRPC, so they shouldn't need a separate RPC ecosystem like Cap'n Proto: https://grpc.io/blog/flatbuffers

Right, but I'm asking if FlatBuffers' approach to encoding is similar to Cap'n Proto. It seems like the answer is "yes", but I might be missing something.

They are both zero-copy, but the designs have a number of differences. I compared them back in 2014: https://capnproto.org/news/2014-06-17-capnproto-flatbuffers-...

Disclaimers:

1) I'm the author of Cap'n Proto; I'm probably biased.

2) A lot could have changed since 2014. (Though, obviously, serialization formats need to be backwards-compatible which limits the amount they can change...)


Cool, thanks for this! I'm happily using Cap'n Proto in a side project, and so far have really enjoyed working with it. It's a really impressive piece of engineering.

asn1 for the win

It's a shame you're catching downvotes for mentioning ASN1. It's so much better than all these newfangled things that it makes them look like toys.

I'm not sure why ASN.1 isn't more popular for a zero-copy encoding system

What serialization format do you all recommend when starting out? Protobufs seems to be the go-to, but what about Cap'n Proto etc?

(personal opinion, not speaking on behalf of my employer)

Start out with protobufs, so you can take advantage of gRPC[1] and all of its libraries and all the other tooling that is out there.

If you profile and determine that serialization/deserialization is actually a bottleneck in your system in a way that is product-relevant or has non-trivial resource costs, then you can look at migrating to FlatBuffers, which can still use gRPC[2].

[1] https://grpc.io/ [2] https://grpc.io/blog/flatbuffers


Ah, good tip, thanks!

I'd agree on starting with protobufs, precisely because it's the go-to. Other options have plenty of advantages, but none are as widely supported.

> So naturally, branching, building and testing across all these different microservices is difficult. Imagine having to roll back a commit, then having to find the equivalent commit in the two other parent services, then rolling back the horizontal services to the equivalent commit

that should not happen. if it does you don't have a microservice architecture, you have a spaghetti service architecture.


How does one know if X is the wrong solution or if X is the right solution but the shop is "doing X wrong"? This also applies to monoliths: maybe they can scale, but one is doing them wrong. Changing from doing monoliths wrong to doing microservices wrong is obviously not progress.

The same issue appeared when OOP was fairly new: people started using it heavily and ended up making messes. They were then told that they were "doing it wrong". OOP was initially sold as magic reusable Lego blocks that automatically create nice modularity. There was even an OOP magazine cover showing just that: a coder using magic Legos that farted kaleidoscopic glitter. Microservices is making similar promises.

It took a while to learn where and how to use OOP and also when not to: it sucks at some things.


Simple:

If X is a technology I don't like, and it's not working for you, then it's the wrong solution.

If X is a technology I don't like, and it is working for you, then you simply haven't scaled enough to understand its limitations.

If X is a technology I like, but it's not working for you, then your shop is "doing X wrong".

If X is a technology I like, and it's working for you, then it's the right solution and we're both very clever.


edit: See below

Uh, my post was snark towards engineers' general tendency to champion and defend their own favourite architecture over competing approaches rather than focusing on the most suitable architecture for the circumstance, in response to this question:

> How does one know if X is the wrong solution or if X is the right solution but the shop is "doing X wrong"?

Edit: And a civil conversation ensued. :)


Sorry, I just misread your post as being more of an attack. As I had said, I dislike the constant "hype means something is bad" posts that I've been seeing for years - I think it's really unfortunate.

No worries, I can see how it might have been read that way in the wider context of the conversation. And I totally hear you on the "hype equals bad" thing. If something's popular, it's popular for a reason. That reason MIGHT just be trendoids jumping on a hype train, but it might also be because the thing is good.

If you want to learn "the right way" I highly recommend "Building Microservices: Designing Fine-Grained Systems" by Sam Newman[1]

[1] https://smile.amazon.com/Building-Microservices-Designing-Fi...


I have posted about this before. OP is describing nano-services, not micro-services. A nano-service is a micro-service that provides leftpad via a JSON API.

You have an auth.yourapp.com and api.yourapp.com and maybe tracer.yourapp.com, and those three things are not a single app that behaves like auth, api or tracer depending on the setting of a NODE_ENV variable? If so, you have micro services.


It sounds like you're paraphrasing the great Steven Jobs: "You're holding it wrong"

Be that as it may, I believe mirkules's issue is not an uncommon one. Perhaps saying "building a microservice architecture 'the right way' is a complex and subtle challenge" would capture a bit of what both of you are saying.

Something being complex and therefore easy to mess up does not mean it's a great system and the users are dumb, especially if there are other (less complicated, less easy to mess up) ways to complete the task.


> Perhaps saying "building a microservice architecture 'the right way' is a complex and subtle challenge" would capture a bit of what both of you are saying.

Supporting API versioning is not a complex or subtle challenge. It's actually a very basic requirement of a microservice architecture. It's like having to check for null pointers: it requires additional work, but it's still a very basic requirement if you don't want your app to blow up in your face.


> that should not happen. if it does you don't have a microservice architecture, you have a spaghetti service architecture.

A "service" is not defined principally by a code repository or communications-channel boundary (though it should have the latter and may have the former), but by a coupling boundary.

OTOH, maintaining a coupling boundary can have non-obvious pitfalls; e.g., supported message format versions can become a source of coupling--if you roll back a service to the version that can't send message format v3, but a consumer of the messages requires v3 and no longer supports consuming v2, then you have a problem.


you should never actually remove an api without a deprecation process. basically a message format is part of your api. actually in a microservice world your interfaces/apis should be as stable as possible. if something gets changed, it needs to go through a deprecation, which means not removing it until a certain amount of time has passed.

private apis exist for a reason. going through the deprecation cycle repeatedly sounds like a waste of time if you control all sides of the system. which is to say, not every interface needs to be a service boundary. you do need to keep versioned apis, but there is a cost to doing so.

IMHO, part of a successful microservices program is treating your "private" APIs like your public ones, with SLAs, deprecation protocols, etc. Otherwise, you end up with tight coupling to the point that you should just go with a monolith.

When you are still figuring out the problem space and message formats are being created, removed, extended and reverted several times a sprint, and you have already gone live with third parties depending on stable interfaces so that you cannot make the changes you need in a timely fashion, the warm arms of a purpose-built monolith look particularly attractive.

Indeed. At that early stage of development, a monolith (or at least monolith-style practices, such as tight coupling between API providers and consumers) is definitely simpler and more efficient. But it wouldn't hurt to take steps from the beginning to make it easier to break the monolith apart when/if it becomes pragmatic to do so.

> private apis exist for a reason. going through the deprecation cycle repeatedly sounds like a waste of time if you control all sides of the system. which is to say, not every interface needs to be a service boundary.

The whole point of a microservice is to create a service boundary. If you have a private interface where both sides are maintained by the same team, both sides should be in the same service.


All interfaces are service boundaries. The only difference is whether you control all clients and servers or not. Either way, API versioning is a must and trivial to implement, and there is really no excuse to avoid it. All it takes is a single integration problem caused by versioning problems to waste more time than it takes to implement API versioning.

The accept header could fix this easily if no third parties are involved and if it's a REST-like API protocol

The accept header is a mechanism for communicating formats you support; it doesn't do anything to address the problem of managing change to supported versions, which is a dev process issue, not a technical issue.

Feature flags, just enable it.

When all applications are adjusted, the accept headers request a protobuf format in return.

=> Propagated everywhere except when a JS AJAX call happens to the api-gateway.


> The accept header is a mechanism for communicating formats you support; it doesn't do anything to address the problem of managing change to supported versions

That assertion is not true. Media type API versioning is a well established technique.
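For anyone unfamiliar with the technique: the version rides in a vendor media type (the names below are made up), so the same URL can serve old and new representations side by side while clients migrate.

    GET /orders/42 HTTP/1.1
    Accept: application/vnd.myapp.order.v2+json

    HTTP/1.1 200 OK
    Content-Type: application/vnd.myapp.order.v2+json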


The accept header could fix this easily if no third parties are involved

Heh, I like that “Spaghetti Microservices”.

You are right. It should not happen. It is difficult to see these pitfalls when unwinding an unwieldy monolith, especially when, as an organization, all you’ve ever done are unwieldy monoliths that have a gazillion dependencies, interfaces and factories.

We learned from it, and we move on - hopefully, it serves as a warning to others.


I've heard the term 'Distributed Monolith'

I think we should call it angel-hair pasta code. You know, because it's very small spaghetti.

MicroPasta

Tangled angelhair.

No, it's ravioli oriented architecture

To handle deploys and especially rollbacks you need a working CI, or rather CD, chain where everything is automated. If there are dependencies in your architecture, all of them need to be part of the same deploy so you can redo the deploy before that one. With a monolith, you get a simple deploy of all dependencies, as they are baked into one thing. There are downsides to everything being baked into one thing. Having all your "micro" services deployed as one package would make the dependencies you have behave as they did before the move off the monolith. Seeing these kinds of dependencies, imho, even in a monolith is an architecture problem, one that in the monolith shows up when code changes and fixes can't be handled as localized fixes, but changes have to go into large parts of the code base.

I think a bigger thing here is that each deploy should have a REALLY good reason to not be backwards compatible for some pretty long time period. If that requirement is painful for you, then you probably have two pretty tightly coupled services.

Why so many microservices?

A monolith has huge advantages when your code is maybe 100k lines or below:

1. Easy cross module unit testing/integration testing, thus sharing components is just easier.

2. Single deployment process

3. CR visibility automatically propagates to all parties of interest, assuming the CR process is working as desired.

4. Also, just a personal preference, easier IDE code suggestion. If you go through JSON serializing/de-serializing across module boundaries, type inference/cohesion is just out of reach.

And it is not like a monolith doesn't have separation of concerns at all. After all, a monolith can have modules and submodules. Start abstracting using the file system, by grouping relevant stuff into folders, before putting it into different packages. After all, once things diverge, it is really hard to go back.

Unless you have a giant team and more than enough engineers to spare for devops, microservices can be considered an organizational premature optimization.


JSON parsing IS expensive. Way more expensive than many people realize. There is actually an almost-underground JSON parser SCENE in the .Net ecosystem where people develop new parsers and try to squeeze out the maximum performance using all sorts of tricks: https://github.com/neuecc/Utf8Json . Here there is discussion of needing JSON support in .Net Core that's faster than Json.Net: https://github.com/dotnet/announcements/issues/90 .

And people say web applications are never CPU bound :)


gRPC is a little bit faster; Google created protobuf, which should be easier to migrate to than gRPC, but the protocol is unreadable (binary)...

JSON has its advantages. I prefer a feature flag when I need performance (protobuf vs JSON), http headers do the rest


gRPC is built on top of protobuf. It’s literally just RPC with protobufs.

one could argue that if serialization/deserialization is the majority of what your api is spending time on then you’ve got distributed monoliths and shouldn’t be making the call in the first place. also ensuring you have the fastest json parser before having a real problem is premature optimization

> JSON parsing IS expensive

Well, maybe

https://news.ycombinator.com/item?id=19214387


"turned a team of 15 engineers from managing one giant monolith to two teams managing about 10 or so microservices"

Knowing absolutely nothing about your product, this sounds like a bad way to split up your monolith.

Probably something like 5 teams of 3 each managing 1 microservice would be a better way to split things up. That way each team is responsible for defining their service's API, and testing and validating the API works correctly including performance requirements. This structure makes it much less likely services will change in a tightly coupled way. Also, each service team must make sure new functionality does not break existing APIs. Which all make it less likely to have to roll back multiple commits across multiple projects.

The performance issues you cite, also seem to indicate you have too many services, because you are crossing service boundaries often, with the associated serialization and deserialization costs. So each service probably should be doing more work per call.

"all depending on a common parent microservice"

This makes your microservices more like a monolith all over again, because a change in the parent can break something in the child, or prevent the child from changing independently of the parent.

Shared libraries I think are a better approach.

"Authentication/Authorization also becomes challenging since every microservice needs to be auth-aware."

Yes, this is a pain. Because security concerns are so important, it is going to add significant overhead to every service to make sure you get it right, no matter what approach you use.


Probably more like do your best to reason about the boundaries of each bounded context and pick an appropriate number of services based off that analysis?

Surely splitting up your application along arbitrary lines based on the advice of an internet stranger who's never seen the application and doesn't know the product/business domain just isn't a sound way of approaching the problem.


in the end, your architecture will reflect your organigram. (IIRC it's Conway's law?)

You are correct.

Conway's Law is profound. Lately I realized even the physical office layout (if you have one) acts as an input into your architecture via Conway's Law.


We use grpc instead of rest for internal synchronous communication, but we've also found that by using event pub/sub between services, there are not many use cases where we have direct calls between services.

We used to have a parent maven pom and common libraries but got rid of most of that because it caused too much coupling. Now we create smaller more focused common libraries and favor copy/paste code over reuse to reduce coupling. We also moved a lot of the cross cutting concerns into Envoy so that the services can focus on business functionality.


> favor copy/paste code over reuse to reduce coupling.

This looks like a big step backwards to me.


I would say it depends on the stability of what's being copy/pasted. If it's just boilerplate, it's less concerning.

In my opinion, decoupling should be prioritized over DRYness (within reason). A microservice should be able to live fairly independently from other microservices. While throwing out shared libraries (which can be maintained and distributed independently from services) seems like overkill, it seems much better than having explicit inheritance between microservice projects like the original poster is describing.


Sure, no problem with boilerplate.

For any non trivial code, which needs to be maintained and be kept well tested, to the contrary of the OP, I would favor shared libraries over copy/paste.


How do you handle cases where your client is awaiting a response with a decoupled pub/sub backend? E.g a user creates an account and the client needs to know their user id.

Would that user object be the responsibility of one service, or written to many tables in the system under different services, or...?


For one, you could use something like snowflake IDs so that whatever server receives the user data first can generate and return an id for that user before tossing the data on a queue to be processed.
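For illustration, a minimal snowflake-style generator in Java (class name and epoch are made up; the 41/10/12 bit split is the one Twitter's snowflake used). The receiving service can hand this ID straight back to the client and publish the rest of the work to a queue:

    // Snowflake-style ID: 41 bits of ms since a custom epoch,
    // 10 bits of worker ID, 12 bits of per-millisecond sequence.
    public final class SnowflakeIds {
        private static final long CUSTOM_EPOCH_MS = 1546300800000L; // 2019-01-01 UTC
        private final long workerId;   // 0..1023, unique per service instance
        private long lastTimestamp = -1L;
        private long sequence = 0L;

        public SnowflakeIds(long workerId) {
            this.workerId = workerId & 0x3FF;   // keep 10 bits
        }

        public synchronized long nextId() {
            long now = System.currentTimeMillis() - CUSTOM_EPOCH_MS;
            if (now == lastTimestamp) {
                sequence = (sequence + 1) & 0xFFF;   // 12-bit counter
                if (sequence == 0) {                 // overflowed: wait for next ms
                    while (now <= lastTimestamp) {
                        now = System.currentTimeMillis() - CUSTOM_EPOCH_MS;
                    }
                }
            } else {
                sequence = 0L;
            }
            lastTimestamp = now;
            return (now << 22) | (workerId << 12) | sequence;
        }
    }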

How would you approach a situation where a client updates a record in service A, and then navigates to a page whose data is returned by service B, which has a denormalized copy of service A's records that hasn't consumed and processed the "UpdatedARecord" event?

Do we accept that sometimes things may be out of sync until they aren't? That can be a jarring user experience. Do we wait on the Service B event until responding to the client request? That seems highly coupled and inefficient.

I'm genuinely confused as to how to solve this, and it's hard to find good practical solutions to problems real apps will have online.

I suppose the front end could be smart enough to know "we haven't received an ack from Service B, make sure that record has a spinner/a processing state on it".


You use eventing only when eventual consistency is acceptable. In your scenario, it sounds like it is not. So then you should use synchronous communication to ensure the expected experience is met. However, that also means that now you can't do stuff with service B without service A being up. So you're trading user experience against resiliency.

Also, you should check your domains and bounded contexts and reevaluate whether A and B are actually different services. They might still legitimately be separate. Just something to check.


Some people advocate that microservices own their data and only provide it through an API. In this scenario, Service B would need to query Service A for the authoritative copy of the record. I think the standard way to deal with the query and network time is, yes, to wait until Service A provides the data and timeout if it takes "too long".

Then your question is about optimizing on top of the usual architecture which hopefully is an infrequent source of pain that is worth the cost of making it faster. I could imagine some clever caching, Service A and Service B both subscribing to a source of events that deal with the data in question, or just combining Service A and B into one component.


I would create the account directly from the initial call and return its ID and then publish an account created message. Any other services could receive the message and perform some action such as send a welcome email or do some analytics.

gRPC and Envoy are exactly the things we are exploring now, although copy/paste would never fly in our org.

I know copy paste is looked down upon, but it's even suggested as a good practice (a little bit) in golang. A little copying is better than a little dependency.

https://go-proverbs.github.io/


Consider larger services than a typical microservice.

For the same reason monoliths tend to split when the organization grows, it is often more manageable to have a small number of services per team (ideally 1, or less).

It's ok if a service owns more than one type of entity.

It's less good if a service owns more than one part of your business's domain, however


> Consider larger services than a typical microservice.

People seem to forget that there’s a continuum between monolith and microservices, it’s not one or the other.

Multiple monoliths, “medium”-services, monolith plus microservices, and so on are perfectly workable options that can help transition to microservices (if you ever need to get there at all).


That's fine, but reorganizations happen, teams can grow, and there is an advantage to having things be separate services in cases like this.

Definitely don't just stuff unrelated things into a service because the team that normally deals with that service happens to be working on them. If the unrelated stuff takes off, you now have two teams untangling your monolithic service.

That said, I'm a big fan of medium sized services, the kind of thing that might handle 10 or 20 different entities.


I'm going to go out on a limb here and suggest that parsing (and serializing) JSON is unlikely to be the actual problem, performance-wise. (Although "OpenAPI/Swagger" doesn't fill me with enthusiasm.)

More likely, I suspect, is that either you are shipping way too much data around, you have too much synchrony, or some other problem is being hidden in the distribution. (I once dealt with an ESB service that took 2.5 seconds to convert an auth token from one format to another. I parallelized the requests, and the time to load a page went from 10 sec to <3; then I yanked the service's code into our app and that dropped to milliseconds.)

Performance problems in large distributed systems are a pain to diagnose and the tools are horrible.


I've been using NewRelic and it's been wondrous at illuminating performance problems.

The whole point of doing microservices is so that you can split up processing responsibility boundaries reasonably, and each team is responsible for being an "expert" in the service it's responsible for.

This also means that each service should have no other services as dependencies, and if they do, you have too many separate services and you should probably look into why they aren't wrapped up together.

Using a stream from a different service is one thing: You should have clearly defined interfaces for inter-service communication. But if updating a service means you also need to fix an upstream service, you're doing it wrong and are actually causing more work than just using a monolith.

EDIT: and because you have clearly defined interfaces, these issues with updating one service and affecting another service literally cannot exist if you've done the rest correctly.


Perhaps a few ideas:

- Performance: use gRPC/protobuf instead of HTTP/OpenAPI, really not much of a reason to use HTTP/OpenAPI for internal endpoints these days

- Repo Management: No one is stopping you from using a monorepo but yourselves :)


Even just defining what “internal communication” means is difficult. We definitely suffer from the what-if syndrome- “what if some day we want to expose this service to a client?”

Our product is a collection of large systems used by many customers with very different requirements - and so we often fall into this configurability trap: “make everything super configurable so that we don’t have to rebuild, and let integration teams customize it”


Ah. In that case, you can expose your gRPC endpoints as traditional JSON/HTTP ones with gRPC-Gateway, which supports generating OpenAPI documentation too! Best of both worlds.

Sounds like you're suffering from not using YNGNI enough: you're not gonna need it. Build what you need now. When that changes, you can change what you built. That was the original intention of Agile methodologies and BDD or TDD. When the tests pass, you're done.

You must be doing something wrong. In a company I work for, we clearly have separated microservices by bounded context, thus making them completely decoupled.

This is the key - decoupled. When you have to roll back commits across multiple services, they are not decoupled, and you're doing something wrong.

Each service should be fully independent, able to be deployed & rolled back w/o other services changing.

If you're making API changes, then you have to start talking about API versioning and supporting multiple versions of an API while clients migrate, etc.


>then you have to start talking about API versioning and supporting multiple versions of an API while clients migrate, etc.

Which adds some more complexity that just does not exist in a monolithic architecture


Sure it does. Once you have a large monolith, the same coordination problems hit. Just ask the Linux kernel developers: https://lwn.net/Articles/769365/

> This sounds good in theory, but parsing JSON and then serializing it N times has a real performance cost.

It's not just the serialization cost but latency (https://gist.github.com/jboner/2841832) as well, every step of the process adds latency, from accessing the object graph, serializing it, sending it to another process and/or over the network, then building up the object graph again.

The fashion in .net apps used to be to separate service layers from web front ends and slap an SOA (the previous name for micro-services) label on it. I experimented with moving the service in-process and got an instant 3x wall clock improvement on every single page load; we were pissing away 2/3rds of our performance and getting nothing of value from it. And this was in the best-case scenario: a reasonably optimized app with binary serialization and only a single boundary crossing per user web request.

Other worse apps I've worked on since had the same anti-pattern but would cross the service boundary dozens/hundreds/thousands of times and very simple pages would take several seconds to load. It's enterprise scale n+1.

If you want to share code like this then make a dll and install it on every machine necessary, you've got to define a strict API either way.


Don't forget:

- Logging. All messages pertaining to a request (or task) should have a unique ID across the entire fleet of services in order to follow the trail while debugging.


"correlation id" is probably the best thing to poke into Google for guidance on it.

I would recommend investigating APM, OpenTracing and Uber’s Jaeger project.

That's really a list of how not to develop microservices, or even software...

Thought must obviously be given to protocols. JSON is an obviously bad choice for this use case...

The point of microservices is loose coupling, including in the code. Having a code hierarchy negates this and arguably is bad practice in general.


I don't think it's necessarily the best idea to immediately have almost as many services as you have engineers. There are usually more gradual and logical ways to split things up.

> We also have our microservices in a hierarchy, all depending on a common parent microservice.

Can you explain this a bit more? I thought the point was to have each service be as atomic as possible, so that a change to one service does not significantly impact other services in terms of rollbacks/etc.

If I'm wrong here let me know, our company is still early days of figuring out how to get out of the problems presented by monolith (or in our case, mega-monolith).


These are excellent points. I was just having a conversation yesterday about system design and how there seems to be a tipping point after which the transactional/organizational costs of segmenting services outweigh the benefits.

My unscientific impression is that some of the organizational costs - just keeping the teams coordinated and on the same page - can become even more "expensive" than the technical costs.


Protocol buffers exist to solve two problems you list, parsing overhead and backwards compatibility.

I always wondered why ASN.1 never really seemed to take off. Tooling maybe?

Yes - both the protocol and tooling are expensive to support, even by 90s standards. Unless you have to use it for compatibility with a service you can’t fix, it’d be much better to start with a better implementation of the concept.

ASN.1 has a number of problems with equivalent parsings (very bad in a security context, has been the source of a number of TLS vulnerabilities), as well as the fact that even discovering if two ASN.1 parsers will give the same result is undecidable.

I agree with all those drawbacks, but auth is something that can be handled with a bit of one-time engineering effort. Where I am, all traffic to microservices comes through an API gateway which is responsible for routing traffic to the correct service, and more importantly authorising access to those endpoints. Once the gateway has completed auth it places a signed JWT in the Authorization header, at which point the microservice's responsibility goes from handling the entire auth process to checking that the signature can be verified.
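The downstream check really can be that small. A sketch of the verify step, assuming the gateway signs with RS256 (claim/expiry validation omitted; in practice you'd lean on a JWT library):

    import java.nio.charset.StandardCharsets;
    import java.security.PublicKey;
    import java.security.Signature;
    import java.util.Base64;

    // Verifies the RS256 signature on a gateway-issued JWT using the
    // gateway's public key. Parsing/validating the claims is a separate step.
    public final class GatewayTokenCheck {
        public static boolean signatureIsValid(String jwt, PublicKey gatewayKey)
                throws Exception {
            String[] parts = jwt.split("\\.");   // header.payload.signature
            if (parts.length != 3) return false;
            byte[] signedBytes =
                (parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII);
            byte[] signature = Base64.getUrlDecoder().decode(parts[2]);
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(gatewayKey);
            verifier.update(signedBytes);
            return verifier.verify(signature);
        }
    }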

> Management of 10-ish repositories and build jobs.

Does each micro service have to live in its own repository? Especially with a common library everyone uses?


I think having a common library that does anything specific to your product is something of an anti-pattern in micro services.

It's not really a micro service - it's a distributed monolith


There are always going to be cross-cutting concerns. You don't want to have every microservice implement its own authentication, auditing, etc. from the ground up.

I really hate the term 'microservice', because it carries the implication that each service should be really small. In reality, I think the best approach is to choose good boundaries for your services, regardless of the size.

People forget the original 'microservice': the database. No one thinks about it as adding the complexity of other 'services' because the boundaries of the service are so well defined and functional.


I really like this example. A lot of databases have very good module separation internally. However, you don't often see people splitting out query planning, storage, caching, etc. into separately hosted services forced to communicate over the network, even in modern distributed databases.

Meanwhile, you also don’t see a lot of people claiming you should have one single repository that stores the source code, configs, CI tooling, deployment tooling, etc., for Postgres and Mathematica and the Linux kernel and the Unity engine, or that operating any one of these kinds of systems should have anything to do with running any other system apart from declared interfaces through which they might choose to optionally communicate or rely on each other as black box resources.

Funny, but we saw a debate around monolithic codebases and the monolithic image in Smalltalk.

A team of three engineers orchestrating 25 microservices sounds insane to me. A team of thirty turning one monolith into 10 microservices and splitting into 10 teams of three, each responsible for maintaining one service, is the scenario you want for microservices.

A team size of 10 should be able to move fast and do amazing things. This has been the common wisdom for decades. Get larger, then you spend too much time communicating. There's a reason why Conway's Law exists.

https://en.wikipedia.org/wiki/Conway%27s_law


I don't think Martin Fowler realized when he wrote the first microservices article that he'd stumbled upon a technical solution to a political problem. He just saw it work and wanted to share.

I don't think Martin Fowler realized when he wrote the first microservices article that he'd stumbled upon a technical solution to a political problem. He just saw it work and wanted to share.

The generation of programmers that Martin Fowler is from are exactly the people from whom I got my ideas around how organization politics affect software and vice versa. There was plenty of cynicism around organization politics back then.


And it's not as if Martin Fowler came up with the idea originally either: QNX, Erlang and many other systems used those basic ideas much earlier (sometimes decades earlier). But this is the web, where the old is new again.

It’s safe to say that Fowler rarely claims to originate things. He’s more of a taxonomist.

He says so himself:

> We do not claim that the microservice style is novel or innovative, its roots go back at least to the design principles of Unix.


It's interesting how few executives understand this come reorganization time.

The architecture of software comes to resemble the organization writing the software. Use that fact or it will use you.

Indeed! Conway's the man. Unless your "service" corresponds to an actual existing team who has the time and authority to focus on it, you are asking for trouble and/or wasteful busywork. I curse misapplied microservices.

By the way, for a relatively small service to be shared by multiple applications, try RDBMS stored procedures first.


Agreed. And there's no reason why your monolith need become 10 microservices. It could be split into say just 3 if that makes more sense.

We have a running joke that we run macroservices, which is really just a 4 way split of our monolith once the team grew large enough.

That's a pretty good way of doing it, decouple what you have to but no more than that.

> Funny, but we saw a debate around monolithic codebases and the monolithic image in Smalltalk.

Was there a consensus resolution?


Was there a consensus resolution?

Smalltalk is awesome. Everyone else is doing it wrong, those dirty unwashed!

https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporativ...


Images are hard to version control.

Not when using a Smalltalk aware version control.

> Funny, but we saw a debate around monolithic codebases and the monolithic image in Smalltalk.

What reasons do you have for making that link? What are you refering to?

It's possible to load some code and snapshot as a Smalltalk image; then load some different code and snapshot as a different Smalltalk image.


What reasons do you have for making that link? What are you refering to?

It's possible to load some code and snapshot as a Smalltalk image; then load some different code and snapshot as a different Smalltalk image.

It's a different story when you're working on a team, and a different story when there are two or more teams using the same repository. Sure, you still have the image. The debate had to do with how the Smalltalk image affected the community's relationship to the rest of the world of software ecosystems, and how the image affected software architecture in the small. That "geography" tended to produce an insular Smalltalk community and tightly bound architecture within individual projects.


It's a different story when you have 3 or 4 code librarians responsible for a repository that's used by a dozen teams (ENVY/Developer).

> … relationship to the rest … insular Smalltalk community…

Perhaps not the image per se, so much as the ability to change anything and everything.

Every developer could play god; and they did.


Every developer could play god; and they did.

Turns out that not every god is as wise and as benevolent as every other god.


That's exactly the point!

There were awesome people who did awesome stuff; and there were others — unprepared to be ordinary.


Did anyone ever build a multi-image Smalltalk? For a lot of stuff it wouldn't make sense, but having the ability to separate images could be useful.

Did anyone ever build a multi-image Smalltalk?

People at least played around with that as a research project. There's one that showed up at the Camp Smalltalks I went to, with a weird-but-sensible sounding name. (Weird enough I can't remember the name.)

There would have been great utility in such a thing. For one thing, the debugger in Smalltalk is just another Smalltalk application. So what happens when you want to debug the debugger? Make a copy of the debugger's code and modify the debugger hooks so that when debugging the debugger, it's the debugger-copy that's debugging the debugger. With multi-image Smalltalk, you could just have one Smalltalk image run the debugger-copy without doing a bunch of copy/renaming. (Which, I just remembered, you can make mistakes at, with hilarious results.)

If you do the hacky shortcut of implementing one Smalltalk inside another Smalltalk (Call this St2), then the subset of objects that are the St2 objects can act a bit like a separate image. In that case, the host Smalltalk debugger can debug the St2 debugger.


What do you mean by a multi-image Smalltalk?

A Smalltalk with multiple, separate images loaded at the same time.

I suspect that I still don't understand what you're really asking. Do you imagine those "multiple, separate images" would run in the same OS process?

Otherwise — [pdf] "Distributed Smalltalk"

http://www.cincomsmalltalk.com/main/documentation/VisualWork...

Otherwise (for source code control) — "Mastering ENVY/Developer"

https://books.google.com/books?id=ld6E19QIMo4C


When I bring up Smalltalk, I get an all-in-one environment from the image I load. It's live, and any code I add goes into that image. Now I can use code control and build specific images, but it pretty much is a one-image-at-a-time world.

What I'm talking about is loading up multiple images into the same IDE and running them like fully separate images with maybe some plumbing for communication and code loading between them. You can sorta pull that stunt by, as stcredzero mentioned, running Smalltalk in Smalltalk, but I want separate images.



Cool, but that looks like a remoting tool not a lot of VMs on my desktop.

> …loading up multiple images into the same IDE…

At the same time? Why? What will that let you do?


It would let me run a network of VMs with different code that could model my whole solution at once, locally.

> locally

Meaning on a single machine. Not across networks.

> run a network of VMs with different code

What do you think prevents that being done with "fully separate images" (VMs in their own OS process) ?


The last time I played with a Smalltalk, all the code was one big image. There was no way to run multiple VMs.

> There was no way to run multiple VMs.

In this example on Ubuntu "visual" is the name of the VM file, and there are 2 different image files with different code in them "visualnc64.im" and "quad.im".

    $ /opt/src/vw8.3pul/bin/visual /opt/src/vw8.3pul/image/visualnc64.im &
    [1] 8689 

    $ /opt/src/vw8.3pul/bin/visual /opt/src/vw8.3pul/image/quad.im &
    [2] 8690
That's created 2 separate OS processes, each OS process is running an instance of the Smalltalk VM, and each Smalltalk VM opened a different Smalltalk image containing different code.

Do you see?


I’m not sure if we are talking past each other or you are ignoring the whole IDE thing. Yes, I can run multiple VMs on the same machine, but you are missing that I want to spin up these VMs in my Smalltalk IDE and not via some terminal launch script. I want my environment there for me to edit and debug code. I’m pretty sure you cannot do that in VisualWorks.

> I want my environment there for me to edit and debug code

Both of those instances of the Smalltalk VM, the one in OS process 8689 and the one in OS process 8690, are headfull — they both include the full Smalltalk IDE, they are both fully capable of editing and debugging code.

(There's a very visible difference between the 2 Smalltalk IDEs that opened on my desktop: visualnc64.im is as-supplied by the vendor; quad.im has an additional binding for the X FreeType interface library, so the text looks quite different).

(iirc Back-in-the-day when I had opened multiple Smalltalk images I'd set the UI Look&Feel of one to MS Windows, of another to Mac, of another to Unix: so I could see which windows belonged to which image.)


Yeah, but they are 2 IDEs not a single IDE. You are running two copies not one copy with two instances. I then need to jump between programs to edit code.

So when I asked "What will that let you do?", the only "benefit" you-can-think-of is the possibility of switching from editing code in visualnc64.im to editing code in quad.im without a mouse-click ?

So when I asked "What will that let you do?", the only "benefit" you-can-think-of is the possibility of switching from editing code in visualnc64.im to editing code in quad.im without a mouse-click?

No, that would not be enough to make anything work. What I can think of is an IDE that had access to all the VMs running and some plumbing for the VMs to communicate. I would love to be able to spin-up Smalltalk VMs so I can simulate a full system on my desk. Having separate IDEs running means I don't have any integration so I have to debug in multiple different IDEs when tracing communications. I can imagine some of the debugging and code inspection that could be extended to look at code running simultaneously in multiple VMs.


Already mentioned up thread — Distributed Smalltalk.

"Open a debugger where you can trace the full stack on all involved machines."

"Inspect objects in the debugger or open inspectors on any of the objects, regardless of the system they are running on."

April 1995 Hewlett Packard Journal, Figure 7 page 90

https://www.hpl.hp.com/hpjournal/95apr/apr95a11.pdf


I want it all, not just debugging. Distributed Smalltalk didn't do it all in one IDE.

Microservices are interesting.

Not technically, as they increase complexity.

But they enable something really powerful: continuity of means, continuity of responsibility - that way a small team fully owns both developing AND operating a piece of a solution.

Basically, an organization tends to be quite efficient when dealing with small teams (about a dozen people, pizza rule and everything); that way information flows easily, with point-to-point communication and without the need for a coordinator.

However, with such architecture, greater emphasis should be put on interfaces (aka APIs). A detailed contract must be written (or even set as a policy):

* how long will the API remain stable?

* how will it be deprecated? with a Vn and Vn-1 scheme?

* how is it documented?

* what are the limitations? (performance, call rates, etc)?

If you don't believe me, just read "Military-Standard-498". Say what you will about military standards, but military organizations, as people who have specified, ordered and operated complex systems for decades, know a thing or two about managing complex systems. And interfaces have a good place in their documentation corpus, with the IRS (Interface Requirements Specification) and IDD (Interface Design Description) documents. Keep in mind this MIL-STD is from 1994.


According to Wikipedia, Military Standard 498 has been replaced with ISO/IEC/IEEE 12207. Do you have any experience with that? Do you have experience with any other modern standards for software development?

Not really, it's something I was confronted with when I was working on military contracts a few years ago.

From what I recall, it's very waterfall-minded in terms of specification workflow, it's also quite document-heavy, and the terminology and acronyms can take a while to get used to.

I found it was a bit lacking regarding how to put together all the pieces into a big system, aka the Integration step. IMHO it's a bit too software-oriented, lacking on the system side of things (http://www.abelia.com/498pdf/498GBOT.PDF page 60).


Thanks for the source.

10 teams of 3 each owning their own little slice of the pie sounds like an organizational nightmare; mostly, you can't keep each team fully occupied with just that one service, that's not how it works. And any task that touches more than one microservice will involve a lot of overhead with teams coordinating.

While I do feel like one team should hold ownership of a service, they should also be working on others and be open to contributions - like the open source model.

Finally, going from a monolith to 10 services sounds like a bad idea. I'd get some metrics first, see what component of the monolith would benefit the most (in the overall application performance) from being extracted and (for example) rewritten in a more specialized language.

If you can't prove with numbers that you need to migrate to a microservices architecture (or: split up your application), then don't do it. If it's not about performance, you've got an organizational problem, and trying to solve it with a technical solution is not fixing the problem, only adding more.

IMO, etc.


"10 teams of 3 each owning their own little slice of the pie sounds like an organizational nightmare; mostly, you can't keep each team fully occupied with just that one service, that's not how it works. And any task that touches more than one microservice will involve a lot of overhead with teams coordinating."

I guess that's where the critical challenge lies. You'd better be damn sure you know your business domain better than the business itself! So you can lay down the right boundaries, contracts & responsibilities for your services.

Once your service boundaries are laid down, they're very hard to change

It takes just one cross-cutting requirement change to tank your architecture and turn it into a distributed ball of mud!


Which has to stand as a damning indictment of the one-service-per-team model, surely?

Something so inflexible can't survive contact with reality (for very long).

At work we run 20-something microservices with a team of 14 engineers, and there's no siloing. If we need to add a feature that touches three services then the devs just touch the three services and orchestrate the deployments correctly. Devs wander between services depending on the needs of the project/product, not based on an arbitrary division.


Well, THERE'S your problem!

If you are doing http/json between microservices then you are definitely holding it wrong.

Do yourself a favor and use protobuf/grpc. It exists specifically for this purpose, specifically because what you're doing is bad for your own health.

Or Avro, or Thrift, or whatever. Same thing. Since Google took forever to open source grpc, every time their engineers left to modernize some other tech company, Facebook or Twitter or whatever, they'd reimplement proto/stubby at their new gig. Because it's literally the only way to solve this problem.

So use whatever incarnation you like.. you have options. But json/http isn't one of them. The problem goes way deeper than serialization efficiency.

(edit: d'oh! Replied to the wrong comment. Aw well, the advice is still sound.)


It might depend a bit on how you scope it, too.

I once worked at a company where a team of 3 produced way more than 25 microservices. But the trick was, they were all running off the same binary, just with slightly different configurations. Doing it that way gave the ops team the ability to isolate different business processes that relied on that functionality, in order to limit the scale of outages. Canary releases, too.

It's 3 developers in charge of 25 different services all talking to each other over REST that sounds awful to me. What's that even getting you? Maybe if you're the kind of person who thinks that double-checking HTTP status codes and validating JSON is actually fun...


I've done that a couple of times. It's a good pattern!

I worked on an e-commerce site a decade ago where the process types were:

1. Customer-facing web app

2. CMS for merchandising staff

3. Scheduled jobs worker

4. Feed handler for inventory updates

5. Distributed lock manager

6. Distributed cache manager

We had two binary artifacts - one for the CMS, one for everything else - and they were all built from a single codebase. The CMS was different because we compiled in masses of third-party framework code for the CMS.

Each process type ran with different config which enabled and configured the relevant subsystems as needed. I'm not sure to what extent we even really needed to do that: the scheduled jobs and inventory feed workers could safely have run the customer app as well, as long as the front-end proxies never routed traffic to them.
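In case it helps anyone picture it, a rough sketch of the pattern (names made up): one artifact, and a role setting decides which subsystems actually start.

    // One deployable artifact; the ROLE env var picks what this instance does.
    public final class Main {
        public static void main(String[] args) {
            String role = System.getenv().getOrDefault("ROLE", "customer-web");
            switch (role) {
                case "customer-web":  startCustomerWebApp();       break;
                case "jobs":          startScheduledJobWorker();   break;
                case "feeds":         startInventoryFeedHandler(); break;
                default: throw new IllegalArgumentException("unknown role: " + role);
            }
        }

        // Each starter wires up only the subsystems that role needs.
        private static void startCustomerWebApp()       { /* ... */ }
        private static void startScheduledJobWorker()   { /* ... */ }
        private static void startInventoryFeedHandler() { /* ... */ }
    }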


Looks like a service oriented architecture to me

It really depends upon what those 25 different services are. If they are trivial to separate sure. Like an image resizing microservice, an email sending microservice, and so on. I mean go wild, these are trivial. Coincidentally, when people like to talk about how easy microservices are, they love to talk about these trivial examples.

What isn't trivial is when someone decides to make an Order microservice and an Account microservice when there's a business rule where both accounts and orders can only co-exist. Good fucking luck with 3 developers, I'm pretty sure with a team of 3 in charge of 23 other microservices you aren't exhaustively testing inter-microservice race conditions.


Or better yet: a massive front end SPA talking to an array of stupid microservices that are not much more than tables with web accessible endpoints for CRUD and probably an authentication service thrown in for good measure. All of the business logic on the JS side of things. The worst of both worlds, now you have a monolith and a microservices based architecture with the advantages of neither.

I work on an application like that. Not 3:25 but not that far off either. I'm quite content with the situation.

The apps all handle a bespoke data connection, converting it into a standard model which they submit to our message broker. From then on our services are much larger and fewer in number. It's very write-once-run-forever; some of these have not been touched since their inception years ago, resulting in decreased complexity and maintenance cost.

The trick is not having REST calls all over yours services. You're just building a distributed monolith at that point.


Last startup I've been part of, we've been having a bit of fun building our server architecture in Swift and so far the One-Binary-Many-Services model has been working out pretty well. You can run it all on single machine, you can have debug hooks, that make it seem like a Monolith, or scale it out if need be. When it comes down to it the Authentication Service really doesn't need to know about the Image Upload Service, and splitting it is all about defining good interfaces. Just need to put some effort in to keep your development environment sane.

I'm kind of dealing with your awful scenario right now. It is pretty bad. What happened was the department used to be a lot, lot larger, and people tended to only have to deal with 7 or 8 of them at a time (it was still excessive; I often had difficulty debugging and keeping things straight for tasks), but after two years of layoffs, other employees quitting, and executives pulling our employees into other departments, we're a tiny shell of what we used to be, and we still have to manage all of those microservices, and it's so difficult.

I've been daydreaming about monoliths and will be asking at interviews for my next job hoping to find more simplified systems. I came from the game industry originally, where you only have one project for the game and one more for the webservice if it had one, and maybe a few others for tools that help support the game.


We recently gave a name to that One-Binary-Many-Services approach - Roles.

https://github.com/7mind/slides/raw/master/02-roles/target/r...


This wasn't actually that. All of them did the same job, just that one did it for widgets for the widget-handling team, and another did it for whatsits for the whatsit-handling team, and another did it for both widgets and whatsits for the reporting system, etc. etc.

Why does it happen? Consultants, buzzwords, $$

I’ve worked at 2 companies with monoliths that had great products and tremendous business success.

And 3 companies with micro service infrastructures that had lousy products and little business success.

Can’t totally blame microservices but I recall a distinctly slower and more complicated dev cycle.

These were mostly newer companies where micro services make even less sense and improving product and gaining users is king.


The definition of “micro” appears to be hugely variable! If you’d asked me I’d say that sure, my last team definitely built microservices. A team of around 10 engineers built and maintained something like 3 services for a product launch, each with a very different purpose, and over time we added a couple more. Three people maintaining 25 services sounds absolutely bonkers to me.

If your monolith grows unwieldy, you have a problem with your code structure which microservices won't solve. As we all know, you need well isolated, modular code with well defined boundaries. You can achieve this just as well in a monolith (and you can also achieve totally spaghetti code between microservices).

Microservices is a deployment choice. It's the choice to talk between the isolated parts with RPCs instead of local function calls.

So are there no reasons to have multiple services? No, there are reasons, but since it's about deployments, the reasons are related to deployment factors. E.g. if you have a subsystem that needs to run in a different environment, or a subsystem that has different performance/scalability requirements, etc.
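One way to keep it purely a deployment choice is to put an ordinary interface at each boundary, so swapping a local call for an RPC (or back) doesn't touch the callers. Rough sketch, made-up names:

    // The boundary is just an interface...
    public interface InventoryService {
        int stockLevel(String sku);
    }

    // ...in the monolith deployment it's an in-process implementation,
    public class LocalInventoryService implements InventoryService {
        @Override
        public int stockLevel(String sku) {
            return queryLocalDatabase(sku);          // same process, same transaction
        }
        private int queryLocalDatabase(String sku) { /* ... */ return 0; }
    }

    // ...in the microservice deployment it's a thin RPC client.
    public class RemoteInventoryService implements InventoryService {
        @Override
        public int stockLevel(String sku) {
            return callInventoryServiceOverRpc(sku); // gRPC/HTTP/etc.
        }
        private int callInventoryServiceOverRpc(String sku) { /* ... */ return 0; }
    }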


That's just "services", though, and it's been the way that people have been building software for a very long time. I can attest to have done this in 2007 at a large website, which was at least 7 years before the "microservices" hype picked up (https://trends.google.com/trends/explore?date=all&q=microser...). When people say "microservices" they're referring to the model of many more services than what you describe, and the associated infrastructure to manage them.

I also think designing in the microservices mindset (i.e. loose coupling, separable, dependency free architecture) is something which can be done on a continuum, and there's not a strict dichotomy between The Monolith and Microservices(tm).

Even if you're working on an early prototype which fits into a handful of source files, it can be useful to organize your application in terms of parallel, independent pieces long before it becomes necessary to enforce that separation on an infrastructure/dev-ops level.


You start with microservices when you realize that including the Elasticsearch API in your jar causes dependency conflicts that are not easy to resolve.

While there are cases where I think microservices make it easier to scale an application across multiple hosts, I don't understand the organizational benefits compared to just using modules/packages within a monolith. IMO a team that makes an organizational mess of a monolith and makes it grow unwieldy will likely repeat that mistake with a microservice oriented design.

And then you pray to whatever God you believe in that you happened to get those 10 abstractions just right!

Even then, that is what libraries are for.


