Hacker News

Since you seem to have it figured out... I'm a monolith guy* in a microservices world. On Friday I was asked a question that, in our old system, was a simple query I could answer in five minutes. In the new system, however, the data is split across three separate microservices, and it had never been captured in a convenient form in Hadoop. (Running two separate queries in two separate systems and writing a program to collate the results takes significantly more developer time. It's not insurmountable, but it's no longer a trivial task.)
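The "collate the results" program is usually a join over two result sets keyed on a shared ID. A minimal sketch, with hypothetical `orders`/`shipments` data standing in for the outputs of the two separate queries:

```python
# Hypothetical sketch: collate result sets from two separate systems
# by joining them on a shared key.
def collate(orders, shipments, key="order_id"):
    """Left-join two lists of dicts on a common key."""
    shipments_by_key = {s[key]: s for s in shipments}
    return [{**o, **shipments_by_key.get(o[key], {})} for o in orders]

orders = [{"order_id": 1, "total": 30}, {"order_id": 2, "total": 45}]
shipments = [{"order_id": 1, "carrier": "DHL"}]
print(collate(orders, shipments))
```

Trivial in isolation, but each added system means another fetch step, another key mapping, and another failure mode to handle, which is where the developer time goes.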

Did you run into these problems? What did you build to solve them?

We have a statistics service for our ML people. When it needs aggregated data that lives in different services, each of those services implements an API, and the statistics layer then makes use of them. Usually a single developer does all of this; we don't have strict ownership.
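A minimal sketch of that shape, assuming hypothetical user and order services; the client functions here are stand-ins for the HTTP endpoints each owning service would expose:

```python
# Sketch of a statistics layer composing per-service APIs.
# The client functions are hypothetical stand-ins for HTTP calls,
# e.g. GET /stats/signups?day=... on each owning service.

def user_service_signups(day):
    return {"2024-05-01": 12}.get(day, 0)

def order_service_revenue(day):
    return {"2024-05-01": 340.0}.get(day, 0.0)

def daily_report(day):
    """The statistics service only composes what each service reports."""
    return {
        "day": day,
        "signups": user_service_signups(day),
        "revenue": order_service_revenue(day),
    }

print(daily_report("2024-05-01"))
```

The point of the pattern is that aggregation logic stays in one place while each service remains the authority on its own numbers.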

We also duplicate a lot of data into a separate event log, which serves as a read-only store for analytics and, as a nice side effect, gives us an event history. The microservices don't write to the log directly; they publish events to the broker, and a dedicated event service appends each event to the correct log.
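An in-process sketch of that flow, with a plain queue standing in for the broker (in production this would be something like Kafka or RabbitMQ):

```python
import queue

# Services publish events to a broker; a dedicated event service is the
# single writer that drains the broker and appends each event to the
# correct per-topic, append-only log.

broker = queue.Queue()
logs = {}  # topic -> append-only list of events

def publish(topic, event):
    """Called from any microservice; fire-and-forget."""
    broker.put((topic, event))

def event_service_drain():
    """The one component allowed to write to the logs."""
    while not broker.empty():
        topic, event = broker.get()
        logs.setdefault(topic, []).append(event)

publish("orders", {"id": 1, "action": "created"})
publish("users", {"id": 7, "action": "signed_up"})
event_service_drain()
print(logs)
```

Keeping a single writer per log means the services never need write access to the analytics store, and the log stays strictly append-only for readers.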

Yes. Sometimes all of this is significant overhead, because everything has to be tested as well. But mostly, when you start a service, the first thing is to provide all the necessary CRUD routines, which 99% of the time is just a few lines of code, since each service is built around at most one aggregate.
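A hypothetical sketch of those "few lines of CRUD" around a single aggregate; the in-memory dict stands in for whatever store the service actually uses:

```python
import uuid

# Minimal CRUD around one aggregate (here, a hypothetical Customer).
# Storage is an in-memory dict purely for brevity.

class CustomerRepository:
    def __init__(self):
        self._store = {}

    def create(self, data):
        cid = str(uuid.uuid4())
        self._store[cid] = {"id": cid, **data}
        return self._store[cid]

    def read(self, cid):
        return self._store.get(cid)

    def update(self, cid, data):
        self._store[cid].update(data)
        return self._store[cid]

    def delete(self, cid):
        return self._store.pop(cid, None)

repo = CustomerRepository()
c = repo.create({"name": "Ada"})
repo.update(c["id"], {"name": "Ada Lovelace"})
print(repo.read(c["id"])["name"])
```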
