Hacker News

I work on a project that is somewhere in the middle. We have one repo that builds several microservices, but we deploy them like a monolith. We make no compatibility guarantees between microservices built from different versions of the repo, and we have some nice tooling for debugging the communication between them.

And we have a little script that fires up a testable instance of the whole shebang, from scratch, and can even tear everything down afterwards. And, through the magic of config files and AF_UNIX, you can run more than one copy of this script from the same source tree at the same time!

(This means we can use protobuf without worrying about proper backwards/forwards compat. It’s delightful.)
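A minimal sketch of how that multi-instance trick might work, assuming each copy of the stack derives its AF_UNIX socket paths from an instance id (the names `make_config` and `echo_once`, and the service list, are invented for illustration):

```python
import os
import socket
import tempfile
import threading

def make_config(instance_id):
    """Per-instance config: every service's socket lives under a private dir,
    so two copies launched from the same source tree never collide."""
    root = os.path.join(tempfile.gettempdir(), f"stack-{instance_id}")
    os.makedirs(root, exist_ok=True)
    return {name: os.path.join(root, f"{name}.sock")
            for name in ("frontend", "billing", "search")}

def echo_once(srv):
    """Tiny stand-in for one microservice: echo a single message."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Two independent "deployments" from the same tree, no port clashes.
a, b = make_config("a"), make_config("b")
assert a["billing"] != b["billing"]

path = a["billing"]
if os.path.exists(path):
    os.unlink(path)  # clean up any stale socket from a previous run
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(1)
t = threading.Thread(target=echo_once, args=(srv,))
t.start()

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
    cli.connect(path)
    cli.sendall(b"ping")
    reply = cli.recv(1024)
t.join()
srv.close()
os.unlink(path)
print(reply)  # b'ping'
```

Because the namespace is the filesystem rather than a fixed TCP port table, tearing an instance down is just deleting its directory.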

I once worked at a company that did something similar. It was a nice compromise.

It was a Rails monolith; one of the larger ones in the world, to the best of our knowledge. We (long story greatly shortened) split it up into about ten separate Rails applications. Each had its own test suite, dependencies, etc.

However, they lived in a common monorepo and were deployed as a single monolith.

This retained some of the downsides of a Rails monolith -- for example, each instance of the app was fat and consumed a lot of memory. However, the upside was that the pseudo-monolith had fairly clear internal boundaries, and multiple dev teams could more easily work in parallel without stepping on each other's toes.

My current project does something similar. There's a single hierarchical, consistent, scalable database, and a library of distributed systems primitives that implement everything, from RPC to leader election to map/reduce, through database calls.

All other services are stateless. I just shoot the thing and redeploy, and it only costs me an acceptable few seconds of downtime.
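The primitives-over-database idea can be sketched with a lease-based leader election. This is only an illustration of the pattern, not the author's implementation: SQLite stands in for their consistent database, and the table, column, and function names are invented.

```python
import sqlite3

LEASE_SECS = 5.0  # how long a leadership lease lasts

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE leader (name TEXT PRIMARY KEY, holder TEXT, expires REAL)"
)

def try_acquire(name, me, now):
    """Claim leadership of `name` if the seat is empty, the previous
    lease has expired, or we already hold it (renewal)."""
    db.execute(
        """INSERT INTO leader (name, holder, expires) VALUES (?, ?, ?)
           ON CONFLICT(name) DO UPDATE
           SET holder = excluded.holder, expires = excluded.expires
           WHERE leader.expires < ?               -- previous lease ran out
              OR leader.holder = excluded.holder  -- renewing our own lease
        """,
        (name, me, now + LEASE_SECS, now),
    )
    db.commit()
    # Whoever the row says holds the seat right now is the leader.
    row = db.execute("SELECT holder FROM leader WHERE name = ?", (name,)).fetchone()
    return row[0] == me

assert try_acquire("jobs", "node-a", now=100.0)      # empty seat: elected
assert not try_acquire("jobs", "node-b", now=101.0)  # lease still live
assert try_acquire("jobs", "node-a", now=101.0)      # holder renews
assert try_acquire("jobs", "node-b", now=200.0)      # lease expired: takeover
```

The whole election is one conditional upsert, so the correctness burden falls on the database's consistency guarantees rather than on any coordination logic in the stateless services, which is what makes the shoot-and-redeploy workflow safe.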

