I'm sympathetic to the cause, but this example would not serve a noob well when diving into the subject.
* It does not work (bug in node Dockerfile)
* It requires extra builds (while users expect `docker-compose up` to _just work_)
* There's no tester container that curls something to assure me it works
* The Dockerfiles are half-baked - e.g. not using layers correctly (`npm install` called after the code `COPY`), `npm build` called redundantly, using `ubuntu:latest` rather than `golang:onbuild`... and so on.
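On the layering point: the usual fix is to copy the dependency manifest before the rest of the code, so the `npm install` layer stays cached until dependencies actually change. A minimal sketch of what that might look like (the base image, file names, and entrypoint here are assumptions, not taken from this repo):

```dockerfile
FROM node:alpine
WORKDIR /app

# Copy only the manifests first: this layer (and the install below)
# is reused from cache unless the dependencies themselves change.
COPY package.json package-lock.json ./
RUN npm ci

# Code changes invalidate layers only from this point down.
COPY . .

CMD ["node", "server.js"]
```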
And yes, I guess I could fix all that... submit a PR, and be declared the king of HN. But today I'd rather just complain.
I think unless you're Netflix, JPMorgan card services, or something else equally large, this approach is a mistake.
To me, the main benefit of the polyglot SOA paradigm is: we simply can't find enough talent located in y skilled in x.
Most teams complain of complexity from "the monolith," but upon further inspection, what we usually see is poor architectural layout. These teams feel that microservices solve this, but they really just make the entire backend more complicated.
The same can usually be achieved through enforcement of an API contract and good leadership rather than TLS handshakes.
My opinion here is highly biased, since I developed it while working at Netflix, but I think you're dismissing a lot of non-technical benefits of microservices.
I'll start off by saying that I agree with you in that when it comes to small teams, microservices is probably not the best option, as it adds additional unnecessary complexity.
But one of the biggest benefits I saw of microservices was that they allow small teams to work together mostly independently. It meant that each group could run in whatever way was best for the four or five folks in the group.
If the auth group really thought Go was the best solution for their problem, then they could do that. If the API team wanted to do a deployment every two weeks on Wednesday, they could do that. If the tools team wanted to deploy on every checkin, they could do that.
It meant that the 1000 engineers didn't have to all agree on a particular language, development method, or development cadence. Each team could do whatever worked best for that team, and was most in control of their own success or failure.
The debate is where that threshold is. 5 engineers, no. 1000 engineers, yes. But most engineering orgs are in between. I suspect microservices start to make a lot of sense at around 50 individual contributors. At that point you've got 10 pizza sized teams and making decisions that affect all of those teams is taking more time than building out the infrastructure needed for them to work independently.
You may have 5 engineers today, but what if you have 1000 engineers tomorrow? Do you start from scratch, or do you use microservices now in case you do grow in size?
Some of the worst evils I've seen in software development come from a team of 5 trying to act as if they were a team of 1000. There are inherent costs to microservices; they don't come without tradeoffs. Those tradeoffs seem unnecessary for a 5-engineer team (though sometimes they make sense). Not only that, it's very unlikely you'll correctly predict how your product will evolve by the time you get to 1000, so even if you did microservices, they probably won't be right for the future evolved product, which will likely include new innovations you can't yet design for.
All you should do, as a team of 5, is write nice, modular, composable, well-tested code. That way your codebase will adapt as you grow. It's quite likely that going from 5 to 1000 people will involve a number of large architectural shifts during the life of the product.
> Most teams complain of complexity from "the monolith," but upon further inspection, what we usually see is poor architectural layout. These teams feel that microservices solve this, but they really just make the entire backend more complicated.
I agree with this, but there's another very important driver for microservices - the presence of multiple customers, each with different ideas as to what makes up the right stack - i.e. SaaS.
In our domain (HR software) microservices make good sense because they allow each customer to compose their own stack, with individual bits coming from us or from third party vendors.
Without this need, I don't think I'd recommend most normal scale companies (Netflix et al excluded obviously) go microservices.
This is interesting, but practically speaking, isn't it better to stick to one language on backend and one on frontend? Seems like you'd need to be proficient in nodejs, go, java and javascript.
That's certainly a valid argument, but the most compelling use-case for a microservice architecture is when your organizational structure (and the specific technical strengths that come with that) mirrors the application structure. There may be a really good reason to write one backend API in Go and the other backend APIs in Java, or whatever. This architecture simply takes that constraint as a premise. Solutions that prescribe diverse polyglot architectures without this constraint are, as you seem to intuit, missing the point.
It's a compelling idea, for sure, but it tends to delay a bunch of bigger costs. It _feels_ super productive to use "the right tool for the job", but at scale this ends up being really expensive. You end up with an organization that grows using different toolsets, libraries, build/CI systems, training/onboarding, test harnesses, developer mobility, etc. As things grow, inconsistency and inability to share become extremely difficult problems.
> Seems like you'd need to be proficient in nodejs, go, java and javascript.
That's perfectly reasonable. Three imperative garbage-collected languages, two statically-typed, one dynamically-typed. That's easy enough to understand.
You would also likely need to be proficient with a shell and OS, and with all four toolchains.
Really, there is still only one backend (the shell), and one frontend (the browser). There is some overhead from using four separate runtimes, but that isn't a serious issue.
The greatest difficulty is communicating between each part and the OS/shell or each other. From what I can tell, this does the latter, each part communicating over a TCP socket.
A solo developer might well have the most reason for a polyglot architecture: they won't have time to write everything personally, but the best or most convenient libraries might be in different languages. Maybe the SaaS provider you want to use wrote libs for their complex API, but only in languages you hadn't chosen. Or whatever. As long as it's not something you'd need to call dozens of times per request, you can quickly spin up a microservice in almost any language to front-end that must-have library.
Working with a microservice architecture will always be more work at the beginning, because you have to build the infrastructure before you even begin.
Does anyone have a great example of an app that just has sane AuthN and AuthZ examples, with users and some sort of basic CRUD operations? I've been working professionally as a software engineer for five years, and at the two companies I've worked at, the user role checking was custom-rolled and spread all throughout the stack. I feel like I've never seen a good example of something more complex than just "user can or cannot edit" - something that entails groups and read/write permissions on those groups for users. Right now, I'm working on an Elixir API, but it seems like most of the user role checking and builtins are all meant for all-in-one webapps instead of APIs.
Ahhh. That rabbit hole is deep. Buckle up if you're going in because it starts ugly and gets worse.
I've spent quite some time exploring the caverns of XACML (eXtensible Access Control Markup Language), even going so far as writing a limited implementation of it in JavaScript. It's infinitely flexible, extremely capable, horrendously complex, and just about the least fun standard to work with. Sure as heck gets the job done though. Just get yourself used to writing and debugging XML and you'll be fine.
I've also looked in great detail at Amazon's IAM policies. These are significantly simpler, and heavily inspired my current favourite library, ladon [1]. I recently wrote a GraphQL API and I found that GraphQL mutations and field accesses mapped nicely to policies in ladon.
It sounds like you're thinking of proper object ACLs, which get pretty complex quickly--see Spring domain object security, Windows / NTFS / Active Directory DACLs, or the often-challenging permission system in AWS. These draw in some common ideas like having a taxonomy to identify the principal and security context, the target of the operation, the operation itself, the object's security parent, whether or not ACEs can be inherited from the parent, and whether to audit success and/or failure conditions.
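To make the common structure concrete, here's a toy sketch in Python of an IAM/ladon-style policy check. This is not ladon's actual API or AWS IAM semantics - just an illustration of the usual shape: policies grant subjects (often groups) a set of actions on a set of resources, explicit deny wins, and no match means denied.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    subjects: set   # who the policy applies to, e.g. {"group:editors"}
    actions: set    # operations covered, e.g. {"read", "write"}
    resources: set  # targets of the operation, e.g. {"articles"}
    effect: str = "allow"  # "allow" or "deny"

def is_allowed(policies, subject, action, resource):
    """Evaluate all matching policies: explicit deny wins, no match denies."""
    decision = False
    for p in policies:
        if subject in p.subjects and action in p.actions and resource in p.resources:
            if p.effect == "deny":
                return False  # a matching deny short-circuits everything
            decision = True
    return decision

policies = [
    Policy({"group:editors"}, {"read", "write"}, {"articles"}),
    Policy({"group:viewers"}, {"read"}, {"articles"}),
]

print(is_allowed(policies, "group:editors", "write", "articles"))  # True
print(is_allowed(policies, "group:viewers", "write", "articles"))  # False
```

Real systems add the pieces mentioned above - inheritance from a security parent, wildcard matching, conditions, audit on success/failure - which is exactly where the complexity explodes.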
Thanks for the feedback :) The bugs are being fixed. Frankly speaking, I didn't expect this exercise to get so popular :D Anyway, the primary purpose of the project is to test how various distributed tracing tools (Jaeger, Zipkin, Instana, etc.) integrate into a polyglot distributed app (yup, the company I work for uses more than 3 languages and frameworks). But now it looks like this toy needs to be maintained properly.
I'm not the author, but probably the intention there is to have a minimal single-executable container, and before the recent multi-stage builds feature that was kind of a pain to implement.
This is all fine and dandy until you add persistence, need microservices to talk to each other, follow requests through the entire flow, etc.
Just think about user creation. After a user is created, a common thing is to send a welcome email. Then you probably need to build an email service, because you'll probably want to send mail at other times as well.
But if an email fails to be sent because there is an error in the service, you need to build around that and report it somewhere. You'd probably write some kind of class or library that makes these requests to other microservices, and now you're already on the path to hell, because that library is in language x and cannot easily be ported to another service, etc.
Polyglot microservices are a shitty idea. They're hell for developers, hell for maintenance, and they don't give the end user any more value.
I am not really a fan of microservices as a concept in general, but polyglot microservices? Seriously, who would want to do that?
Welcome to modern web application development, where we're constantly making everything harder for the developer while providing absolutely no noticeable positive change for the end user and still breaking web conventions at every opportunity.
These are the cons of the microservice approach anyway. Imagine several microservices, where you need to update the contract of one that a few others depend on!