
Microservices from a Startup Perspective - andima
https://www.infoq.com/articles/microservices-startup
======
vorpalhex
Microservices from a startup perspective: don't.

Microservices are great. They are also complex, expensive, and require much
more expertise.

Start with a well-groomed, well-modularized monolith, then split it when
scale (user scale, team scale) makes sense.

~~~
staticassertion
> They are also complex

I would say the complexity is largely in deployment. Otherwise, I find
microservices much simpler to reason about.

As an example, keeping a bunch of small, isolated microservices behind queues
means that I can reason about them very locally. I know what data they own, I
know their interfaces, and I don't really need to think 'outside' of them -
they just send other asynchronous messages out. I don't need to think about
dependencies very much - if a downstream or upstream service fails, it's
irrelevant to the microservice, so long as the queueing system is up.
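
To make that concrete, here's roughly what one of those services looks like
- a minimal sketch in TypeScript using amqplib, with hypothetical queue
names and payload shapes:

    import amqp from "amqplib";

    // One small service: it consumes the one queue it owns and emits to
    // another. Everything it needs arrives on "orders.created" (hypothetical).
    const conn = await amqp.connect("amqp://localhost");
    const ch = await conn.createChannel();
    await ch.assertQueue("orders.created");
    await ch.assertQueue("invoices.requested");

    ch.consume("orders.created", (msg) => {
      if (msg === null) return;
      const order = JSON.parse(msg.content.toString());
      // Local reasoning only: we own this data, we know this interface.
      const invoice = { orderId: order.id, total: order.total };
      ch.sendToQueue("invoices.requested", Buffer.from(JSON.stringify(invoice)));
      ch.ack(msg);
    });

If the downstream consumer is down, messages just sit in the queue until it
comes back - that's the isolation I mean.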

It's a bit more annoying to get infrastructure and whatnot set up, but not a
ton.

> expensive

Maybe? Shouldn't finer-grained scaling be cheaper in some cases?

> require much higher expertise

I don't really agree. Again, by keeping things small and by maintaining the
ability to reason about components locally, I think that it's easier to work
with microservices than monoliths.

> Due to the monolithic software architecture, it was difficult to add new
> features without affecting the entire system and it was quite complex to
> release new changes, since we had to rebuild and redeploy the entire
> product, even though we changed only a few lines of code. That resulted in
> high risk deployments which happened less frequently - new features got
> released slowly.

From the article. For a startup the ability to add features quickly is pretty
important - especially early on.

I'm not advocating microservices-first for every situation, but I've built
systems microservices-first and it's been fine.

~~~
vorpalhex
> just send other asynchronous messages out

So we're sending a new data model in the form of JSON over TCP using AMQP? Or
are we using STOMP over WebSockets? Who maintains the broker? Do we use a
direct exchange or topic exchange? How do we replicate messages across
exchange nodes to ensure uptime? How do we upgrade the broker?

> Shouldn't finer-grained scaling be cheaper in some cases?

Monolith, basic high availability is 3 servers, minimum. Now every single
microservice needs 3x servers, plus autoscaling, plus we need things like
distributed logs, a resilient broker, etc. Even simple things like uptime
pinging are more expensive because we're tracking more hosts.

Sure, if you ignore everything outside of your system then it's really easy to
work on a single microservice, and if your startup has several million dollars
and hundreds of engineers - great, use microservices.

~~~
staticassertion
> So we're sending a new data model in the form of JSON over TCP using AMQP?
> Or are we using STOMP over WebSockets? Who maintains the broker? Do we use a
> direct exchange or topic exchange? How do we replicate messages across
> exchange nodes to ensure uptime? How do we upgrade the broker?

Uh? I don't know? Like, I know how I've done it, idk how _you_ want to do it,
and I won't prescribe implementation details as if they're universal.

> Monolith, basic high availability is 3 servers, minimum. Now every single
> microservice needs 3x servers, plus autoscaling, plus we need things like
> distributed logs, a resilient broker, etc. Even simple things like uptime
> pinging are more expensive because we're tracking more hosts.

If you can run your entire project on 3 systems, yeah, sounds like a case
where splitting out would be more expensive. Though you can do microservices
without having separate hardware - just stick them on the same box. You still
get software fault isolation.

> Sure, if you ignore everything outside of your system then it's really easy
> to work on a single microservice, and if your startup has several million
> dollars and hundreds of engineers - great, use microservices.

That seems like an exaggeration.

------
thinkingkong
The original goal for moving from a monolith to a microservices architecture
was loosely stated as “it's hard for us to work on the app together and ship
features” - something that isn't addressed in the results or suggestions.

How much did the team improve shipping features for customers? Is the app more
reliable? Does the engineering team spend way more time on CI/CD and all the
other “overhead”?

Most teams will never need this type of solution, even though they're tons of
fun to build and operate. I just wish we had a better way of sharing the non-
engineering results of these types of efforts.

~~~
matwood
> Does the engineering team spend way more time on CI/CD and all the other
> “overhead”?

CI/CD is required regardless of microservice or monolith. Once you have it
working for one deployment, it should be trivial for 1+N.

~~~
thinkingkong
I'd welcome a good discussion on triviality in this case. Wiring up Jenkins or
something to just deploy when you press the button can be easy. Wiring it up
so that distributed services can be augmented, deployed, migrated, etc., all
without downtime - to me at least - is complex enough to not be trivial.

~~~
matwood
IMO, CI/CD already means zero-downtime deploys. After a team has done the CI
work on commits and zero-downtime CD, it doesn't matter if it's for a monolith
or microservices. The initial work is not trivial, but once it is done you
just copy it for each service.

Maybe you are thinking about how you deploy inter-service dependencies? In
that case, you don't. I'm not joking. When you move to microservices you have
to treat other internal services like they are something you do not control.
This means maintaining backwards compatibility, versioned APIs, etc...
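
For example, a versioned API can be as boring as leaving the old route
mounted forever. A sketch with Express (hypothetical routes and response
shapes):

    import express from "express";

    const app = express();

    // v1 never changes once clients depend on it.
    app.get("/v1/customers/:id", (req, res) => {
      res.json({ id: req.params.id, name: "Ada Lovelace" }); // old flat shape
    });

    // v2 is free to change the contract because it's a new route;
    // deploying it cannot break a v1 caller.
    app.get("/v2/customers/:id", (req, res) => {
      res.json({ id: req.params.id, profile: { name: "Ada Lovelace" } });
    });

    app.listen(3000);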

At first it seems like it is harder, but once you get into the mindset that
this must always run and serve existing clients, it is rather freeing. Plus,
being able to treat each service on its own makes it easy to focus and do
large changes - and thus move faster.

Compare this to friends who tell me about logging in at 11PM to deploy because
they have to take the system down. Completely taking down the system to deploy
should almost never happen today.

------
ris
I'm already seeing the first generation of developers who've only ever worked
in an environment that's trying to do microservices and think that the only
way of encapsulating code is by making bits of it talk HTTP to other bits.

~~~
ummonk
This. Distributing your components over a network certainly forces
encapsulation, but you can have encapsulation without all the complexities
that come with running microservices in separate processes / machines.

------
andy_ppp
Why not just choose Akka or Elixir, so that your app is built with message
passing from the start? This makes it trivial to break things off onto
different machines when you need to.
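
As a sketch of the shape (in TypeScript, since not everyone reads Elixir -
OTP does all of this properly, this just shows why the later split is cheap):

    // Modules never call each other directly; they only send messages.
    type Handler = (payload: unknown) => void;

    const handlers = new Map<string, Handler>();

    function register(topic: string, handler: Handler): void {
      handlers.set(topic, handler);
    }

    // Today this is an in-process dispatch. Swapping the body for a broker
    // publish (or a process on another node, in Erlang terms) changes no
    // caller.
    function send(topic: string, payload: unknown): void {
      handlers.get(topic)?.(payload);
    }

    register("order.created", (order) => send("email.welcome", order));
    send("order.created", { id: 1 });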

If you choose Elixir you get OTP, which does lots of excellent things to help
you scale out your app; the data passing is transparent between nodes (i.e.
no protobuf or gRPC here), and there are excellent ways to monitor and deploy
your app (and inspect messages as they are passed). I really need to write up
the huge amount of work we’ve put into Node and Go microservices to replicate
things that are already part of the Erlang/Elixir ecosystem...

For startups: don't bother. Even if you do it right, it will kill your
startup before you see the benefits.

~~~
fnl
Good point, but for a startup I think even that is too low-level. I would
suggest something like Istio or Lagom (which is built on top of Akka).

------
supernovae
A very detailed blog post with lots of wise words, but very little actual
experience shared.

For a startup, all this may feel OK - like you're cheating the system and
beating others to the punch - but damn, if you actually make it, good luck
scaling that up.

Personally, I don't see FaaS as a competitive market advantage. It's a tool,
and I'd be a hell of a lot more selective in how I used it.

~~~
scarface74
_For a startup, all this may feel OK - like you're cheating the system and
beating others to the punch - but damn, if you actually make it, good luck
scaling that up._

Why is that a problem? I am speaking in C# terms...

Phase 1 (Monolith): all of your code is in one solution, with separate
domain-specific projects (assemblies), separate namespaces, and a logical use
of access modifiers, each domain exposed as an interface. All of the other
code takes those interfaces as dependencies, wired up with a DI framework.
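
A minimal sketch of that shape (in TypeScript rather than C# for brevity;
the names are hypothetical, but the idea is identical):

    // Callers depend only on the interface, never on a concrete class.
    interface Customer { id: string; name: string; }

    interface CustomerService {
      getCustomer(id: string): Promise<Customer>;
    }

    // Phase 1: the implementation is just another module in the solution.
    class LocalCustomerService implements CustomerService {
      private customers = new Map<string, Customer>([
        ["1", { id: "1", name: "Ada" }],
      ]);

      async getCustomer(id: string): Promise<Customer> {
        const c = this.customers.get(id);
        if (!c) throw new Error(`no customer ${id}`);
        return c;
      }
    }

    // Composition root: the DI container binds interface to implementation.
    const customerService: CustomerService = new LocalCustomerService();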

Phase 2: We need scale _now_. Bring up more VMs behind a load balancer at the
click of a button and you get scale quickly.

Phase 3: Take the project that you need to scale independently out of the
monolithic solution, expose it as an API, and put that one API behind a load
balancer. Then have a proxy class in your client and use your DI framework to
map your interface to your proxy. If you integrate something like Swagger
into your API, it can generate the proxy class for you.
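
Continuing the sketch above, Phase 3 swaps only the binding - the proxy
speaks HTTP to the service you pulled out, and no caller changes:

    // Same interface as before; the implementation is now a remote proxy.
    class CustomerServiceProxy implements CustomerService {
      constructor(private baseUrl: string) {}

      async getCustomer(id: string): Promise<Customer> {
        const res = await fetch(`${this.baseUrl}/customers/${id}`);
        if (!res.ok) throw new Error(`customer API returned ${res.status}`);
        return (await res.json()) as Customer;
      }
    }

    // The only line that changes is the DI binding (URL is hypothetical).
    const customerService: CustomerService =
      new CustomerServiceProxy("https://customers.internal.example");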

If you can deal with eventual consistency, use a queueing mechanism.

Rinse and repeat and keep taking projects out of the monolith as needed.

There will be some shared code you don’t want to run out of process. For
that, create NuGet packages and you won’t have to change your client code at
all.

~~~
holtalanm
I was thinking something along these lines, as well. I recently learned of the
IDesign Method, which seems to try to do something like this, with a standard
architecture for handling scaling in a future-proof kind of way.

Forgive me if my understanding of that architecture is wrong. I have only
read parts of the book so far.

~~~
scarface74
It's basically Domain-Driven Design combined with SOLID principles. If you
have a Customer domain and an Order domain, as tempting as it is to join the
two related tables in code, avoid it. Have another service (not an HTTP
service - a domain service) aggregate information from both domains.
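
A minimal sketch of that aggregation (hypothetical names, TypeScript): each
domain keeps its own store, and the join happens in code:

    // Each domain owns its own data; there is no shared JOIN.
    interface CustomerView { id: string; name: string; }
    interface OrderView { id: string; customerId: string; total: number; }

    interface Customers { getCustomer(id: string): Promise<CustomerView>; }
    interface Orders { getOrdersFor(customerId: string): Promise<OrderView[]>; }

    // The aggregating service composes the two domains in code, so either
    // domain (code and data store) can later be split out untouched.
    class CustomerOrdersAggregator {
      constructor(private customers: Customers, private orders: Orders) {}

      async getCustomerWithOrders(id: string) {
        const [customer, orders] = await Promise.all([
          this.customers.getCustomer(id),
          this.orders.getOrdersFor(id),
        ]);
        return { ...customer, orders };
      }
    }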

It's then mechanically trivial to split out the Customer module and the
Orders module - both the code and the data store.

