Hacker News

Yeah, we do microservices, the "real" kind. Not the "SOA with a new name" kind, but the "some services are literally a few dozen lines of code and we have 100x as many services as devs" kind.

The thing is, you need a massive investment in infrastructure to make it happen. But once you do, it's great. You can create and deploy a new service in a few seconds. You can rewrite any individual service to be latest and greatest in an afternoon. Different teams don't have to agree on coding standards (so you don't argue about it).

But the infrastructure cost is really high, a big chunk of what you save in development you pay in devops, and it's harder to be "eventually consistent" (e.g. an upgrade of your stack across the board can take 10x longer, because there's no big push that HAS to happen for a tiny piece to get the benefits).

Monolithic apps have their advantages too, and many forget it: less devops cost, easier to refactor (especially in statically typed languages: a right click -> rename will propagate through the entire app), and while it's harder to upgrade the stack, once it's done, your entire stack is up to date, not just scattered parts of it. Code reuse is significantly easier, too.




Yeah, add in things like MORE THAN ONE PRODUCTION ENVIRONMENT and LETTING YOUR CUSTOMER HOST AN INSTANCE OF YOUR MICROSERVICES and you have guaranteed your own suffering.


It is a matter of tooling. One data center or ten, it does not matter much with proper tooling. We deploy to seven data centers with a click of a button, with rollback, staggered deployment etc. Centralized logging using ELK gives us great visibility into each DC, without worrying about individual microservice instances.


Easy until you realize you need to somehow manage + configure hundreds of services to run your dev environment...


Beyond just the Docker environment, you only need to be able to run the service you're working on locally. Anything you don't run locally should hit some shared dev/QA infrastructure (which shares a db with your local services). Whatever you use to develop should be able to detect what you have running locally and prefer those instances when available.
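A minimal sketch of that "prefer local, fall back to shared" lookup, assuming a hypothetical resolver where the service names, ports, and the shared dev host are all invented for illustration:

```python
import socket

# Hypothetical shared dev/QA host; substitute your own.
SHARED_DEV_HOST = "dev.internal.example.com"

# Ports each service conventionally binds to when run locally (made up).
LOCAL_PORTS = {
    "billing": 8081,
    "accounts": 8082,
}

def _is_listening(host: str, port: int) -> bool:
    """Cheap check: can we open a TCP connection to host:port?"""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.2)
        return s.connect_ex((host, port)) == 0

def resolve(service: str) -> str:
    """Return a base URL, preferring a locally running instance."""
    port = LOCAL_PORTS.get(service)
    if port and _is_listening("127.0.0.1", port):
        return f"http://127.0.0.1:{port}"
    # Not running locally: route to the shared dev infrastructure.
    return f"http://{SHARED_DEV_HOST}/{service}"
```

The dev tooling (HTTP client wrapper, service mesh sidecar, whatever you use) would call `resolve()` instead of hardcoding hosts, so the same code works on a laptop with two services running and in the shared dev cluster with all of them.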


Dockerized apps make it simple to run in a dev environment, which is what we do. Of course, one cannot run everything on a laptop; we have a dev cluster.


"Different teams don't have to agree on coding standards (so you don't argue about it)."

Unsure if sarcastic.


Not at all sarcastic. I've seen endless wheel-warring over spaces vs. tabs and indentation levels. Mostly I ignore it.


Seems like a terrible idea if "different teams" actually means "random assortment of developers for this specific project." If it actually means "different teams," e.g. you rarely if ever would move from one team to another, I don't see the issue if one team uses tabs and one uses spaces, or you have different naming conventions or whatever.


Well, you use the standard you like, and the other team can use the standard they like. Then you write a micro service to convert from one standard to the other.


So that sounds pretty much like a function call in a monolithic app. How do you store state? I assume you need to share it between all those microservices.

>The thing is, you need a massive investment in infrastructure to make it happen.

I thought that one of the selling points of microservice architectures was the minimal infrastructure. I am really struggling to see an advantage in this way of doing things. You are just pushing the complexity to a devops layer rather than the application layer, even further from the data.


I wonder what language I would pick for that. I usually use Scala, but it seems a bit silly when the footprint of the platform would massively outweigh the actual service. I don't like Go. I like Python, but I prefer static typing. Rust seems a bit too low level (although I'd like to try it for embedded). I don't see any point in learning Ruby when I already know Python well.

Maybe Swift? Scala Native in a year or two? I've done a little Erlang before, so maybe Elixir?


Building microservices needs discipline, an eye for finding and extracting reusable components, and, as you said, investment in infrastructure.

Monoliths invariably tend to become spaghetti over time, making any non-trivial refactoring impossible. With microservices, interfaces between modules are stable and the spaghetti stays localized.


Can you expand on how you do logging/debugging/monitoring?


what are the big infrastructure costs?


Deployment has to be easy. Creating a new service from scratch (including monitoring, logging, instrumentation, authentication/security, etc.) and deploying it to QA/production with tests has to take minutes, from the moment you decide "Hey, I need a service to do this" until it's in prod.

Because individuals may be jumping through dozens of services a day: moving, refactoring, deploying, reverting (when something goes wrong), etc. It has to be friction-free, or you're just wasting your time.

E.g.: a CLI to create the initial boilerplate, a system that automatically builds a deployable on every commit, and something to deploy that deployable nearly instantly (if tests pass). The services are small, so build/tests should be very quick; if you push above 1-5 minutes for an average service, it's too slow to be productive.
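A rough sketch of what such a scaffolding CLI might look like, assuming an invented `new-service` command; the repo layout, file names, and the shared `make run` convention are all illustrative, not anyone's actual tooling:

```python
#!/usr/bin/env python3
"""Hypothetical 'new-service' CLI: scaffolds the boilerplate for a
new microservice so creation takes seconds, not days."""
import argparse
import pathlib
import textwrap

TEMPLATE_MAIN = textwrap.dedent("""\
    # Generated entry point; wire in your handler here.
    def handler(request):
        return {"status": "ok"}
    """)

def scaffold(name: str, root: pathlib.Path = pathlib.Path(".")) -> pathlib.Path:
    """Create the standard repo layout every service shares."""
    service_dir = root / name
    (service_dir / "tests").mkdir(parents=True, exist_ok=True)
    (service_dir / "main.py").write_text(TEMPLATE_MAIN)
    # One run command, standard across all services, so anyone can
    # clone the repo and start it without learning anything new.
    (service_dir / "Makefile").write_text("run:\n\tpython main.py\n")
    return service_dir

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="scaffold a new microservice")
    parser.add_argument("name", help="name of the new service")
    args = parser.parse_args()
    print(f"created {scaffold(args.name)}")
```

The real version would also register the service with CI, monitoring, and auth, which is exactly where the "expensive to build" part of the infrastructure lives.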

Anyone should be able to run your service locally just by cloning the repo and running a command that's standard across all services. Otherwise, having to learn something new every time you need to change something will slow you down.

That infrastructure is expensive to build and to get all working together.



