> With this transformation, microservice architectures have become the standard for building cloud native applications, and it is predicted that by 2022, 90% of new apps will feature microservice architectures.
What do other engineers think of this statement with regard to startups and MVPs? Am I wrong for thinking that monolithic architectures are still the best way to get started from 0? Is it simply a matter of better tooling to make creating a micro-service architecture as easy as spinning up a rails or django api?
As an architect, and one using primarily Microsoft technologies at that, I really doubt Microsoft's 90% figure.
Microservices have their place, but I really think monolithic architectures are far more suitable for the vast majority of cases.
Every time I've seen microservices in use in the enterprise space, it's been a complete mess - no documentation, incredibly brittle services written in a multitude of languages using just as many idioms, devs scared to touch anything lest they break it, no standardised logging... doing microservices "right" requires a highly disciplined team with processes and DevOps enshrined in the company.
The same report predicted that "500 million apps" will be created over the 2018-2023 timeframe, which looks suspiciously like someone saying "100 million a year, what do you think Frank?" and then getting the nod because "a billion" is even more clearly absurd.
It will be made so trivial to do that it will be impossible to avoid.
Medium will be awash in howtos, github will have 9,000 CLI tools to generate yml templates for all this shit
Nothing stopping us from taking advantage of this and building locally hosted Docker/Kube setups, importing from github what we need to run private social media portals for friends and family, with dynamic DNS and other tools for finding each other.
You don't need microservices if you think your business will eventually fail. On the other hand, if it's a success then at some point microservices are much more maintainable. It's extremely difficult to change a monolith to microservices. Right now there is a heavy infrastructure price to pay for microservices, but if that gets solved then why would you not start with microservices?
Microservices are for scaling development teams. They're not inherently better to maintain.
You just can't reliably work on one artifact with dozens of people without creating a lot of complexity, which is why microservice architectures can be better in some cases.
If you're not going to increase the development team beyond a single smallish group of people, you'll be much more productive and your service will be way faster as a monolith.
I'm pretty sure most SaaS companies want to scale their development teams, so it's basically a given that most want to use microservices as well.
Individual microservices are definitely easier to maintain than a monolith. Ever experience a big ball of mud? Every monolith I've worked with turned into one.
Sounds like you are arguing that for small teams a monolith is better. No one would argue that they are not. If you read my post, I talked about businesses that become a SUCCESS, meaning that they grow beyond that initial small team. Once your team becomes large, if you started with a monolith, you will eventually need to either break it apart into microservices or try to manage the monolith across many teams.
IF microservices could be as easy to develop from day 1 as a monolith, why would you start with a monolith? Isn't that what technologies like Dapr are trying to do? Make developing microservices easier?
> Individual microservices are definitely easier to maintain than a monolith. Ever experience a big ball of mud? Every monolith I've worked with turned into one.
Can we just create a Visual Studio extension that allows you to pretend the various components of your software are "microservices"?
"Sorry, you're trying to stuff too much functionality into your XYZ class. You'll need to break it off into another class if you want the Pseudo-Microservice plugin not to complain."
Too much functionality in your XYZ class is not generally the problem. It's the dependencies between the classes and the way the frameworks (looking at you, Spring and Hibernate) can make it too easy to couple your code via transactions, ORM graphs, etc.
All modern IDEs analyse dependencies.
Spaghetti is usually built when there are too many people working on one project.
Microservices are for scaling people; there's no need for microservices when your company has very few employees.
This is an engineering argument that's disguised as a business argument. In reality a monolith has way less overhead when starting a new product or business and makes a lot more sense than sinking resources into something that isn't even needed at the time. Not to mention that from an engineering standpoint you can't even know for certain which components need to be broken out into a microservice without the evidence of traffic.
Following your logic to its ultimate conclusion, you may as well also over-hire on staff, get a super expensive office, crazy benefits packages, etc. Why not? You don't need that stuff if you think your business will eventually fail.
If you read my post, I never argued against starting out with a monolith. Of course they have lower overhead. I argued that if you're successful, like say Twitter, then you will eventually have a problem with your system all tied together in a monolith.
> It's extremely difficult to change a monolith to microservices
I beg to differ. It's completely straightforward, for the most part. You split them out, one by one. That's a big part of the reason for its popularity.
If it's really that difficult, maybe it's not a great fit for the problem at hand.
No, the problem is it can be extremely easy to break services into microservices, especially using the slick tools I've seen in Azure (I'm sure other providers have them as well). They just work, and they're cheap!
But then you realize debugging and refactoring is way harder, and you aren't getting any real benefits from splitting things apart like that.
OK, agree to disagree then. I spent a year on a team trying to break a monolith into microservices, and generally we failed. We ended up with microfrankenservices that had no clear seams. The problem is generally that the state gets mixed up throughout the app: unrelated entities participating in the same transaction, for example, and references to domain objects everywhere in the monolith.
The first developer (who was fired) of the startup I am working for presented exactly these arguments. Having no experience in developing and maintaining any real system, and having read some consultants' blogs, he was confident that a microservice architecture would solve all the problems. The product was delayed by months, was extremely buggy, and (god knows why) used an asynchronous architecture (the front-end had to poll for basic CRUD operations). This cost us a big customer.
If you are starting out, please make sure the complexity and development overhead of microservices is worth it. Don't listen to the hype.
The system you build initially when you're not exactly sure how your product or business are going to work should definitely not be the system that you then take and "scale up" to 100+ developers. Any expectation of doing that is foolhardy and a sure recipe for never emerging from the tarpit of microservice complexity.
What on earth makes people think the responsibility divisions they make on day 1 are going to be the right responsibility divisions further down the line? Transplanting parts of your data model across boundaries is a complex and painful job.
No. The real use cases for true microservices are only 10%, but at this point in the hype cycle you would be crazy to even suggest designing/developing a monolith. You will be looked upon as someone from another generation. This is so unfortunate, but it's the reality in a lot of organizations that I see.
Also read it as serverless and you will understand the hype. I think depending on the type of application I'm building I would use either approach. I'm also a fan of having both approaches where sensible, but only if I'm already deploying to the cloud. If I'm on local hardware I don't care; it can all be monolithic until something in my service requires more dedication.
I think this really begs the question of what exactly is meant by a microservices architecture. In my mind this is one of those subjects that is almost infinitely variable, producing a range of responses from everyone you ask. Personally, I contribute to this mess by thinking that a great way to start an MVP is as a monolith organized into microservices, that is, to adopt some of the build-time organizational concepts but to just run it like any old monolithic app, without the runtime complexity of a "true" microservices stack.
i think anybody saying the word "microservices" is bullshitting. it's just services. if you are doing api requests anywhere from inside your app - you're already doing "microservice architectures". db is a "microservice", kv store is a "microservice", etc. that 90% figure relates to something so vague that it's possible to argue it's already true, no need to wait until 2022.
I also think it's crazy.
Even in JVM land, spring boot is quite fast at getting a POC done.
Setting up infra for microservices when you are rushing for demos is too much.
I think that's exactly wrong. Microservices might end up winning because there will be no infra to set up. Microservice communication and scaling is a pretty generic problem: it is automated, and cloud providers offer managed solutions (e.g. serverless). In the end, you just deploy code.
On the other side of the room, monoliths always need some custom provisioning and deployment. If it has to evolve into more than a monolith (queue, workers...), it's that much more infra to set up and maintain.
I don't think people talking about microservices for POCs and MVPs are talking about maintaining your own cluster and handling scaling yourself. You go for managed solutions that allow you to focus on delivering value.
For startups that matter, the need to break up the monolith will surface pretty quickly: when a single database can no longer handle the traffic, and/or the eng team size grows past 100 or so.
Also, as tooling improves, developing microservices is not that much harder than developing a monolith.
Why does scale suddenly warrant microservices when we've had countless examples of successful sites that scale without them? I agree microservices seem to be a reasonable decision to make if you find yourself with 100 engineers. But let's not conflate something that falls out of Conway's law with the idea that it's also necessary to get computers to serve more users.
Interesting. One of the heavily marketed advantages of using micro-services is the ability for individual sub-systems to scale independently of one another, which is obviously a cost optimization that matters once you reach a certain scale, but is isolating components to dedicated teams the larger advantage?
The comment you replied to mentioned "startups that matter" that have significant scale and have 100+ engineers. I think if you're at that point it makes sense, but when you're finding product/market fit and have 1-2 founding engineers working on a codebase, is it safe to assume micro-services are a premature optimization that unnecessarily adds to a founding team's already heavy workload?
Microservices and their infrastructure are shit to maintain on a small team. There is more overhead in that than monolithically designed software. The best use of microservices on a small team is for appropriate reusable services with low maintenance or almost no maintenance at all.
> One of the heavily marketed advantages of using micro-services is the ability for individual sub-systems to scale independently of one another, which is obviously a cost optimization that matters once you reach a certain scale, but is isolating components to dedicated teams the larger advantage?
It's not like this is a new and unknown opinion... It's been repeatedly pointed out for years now.
> With this transformation, microservice architectures have become the standard for building cloud native applications, and it is predicted that by 2022, 90% of new apps will feature microservice architectures
If I understand it correctly, it will make it easier for me to build applications by separating the "plumbing" (stateful & handled by Dapr) from my business logic (stateless, speaks to Dapr over gRPC). If I build using event-driven patterns, my business logic can be called in response to state changes in the system as a whole.
I think an example of stateful "plumbing" is a non-functional concern such as retrying a service call or a write to a queue if the initial attempt fails. Since Dapr runs next to my application as a sidecar, it's unlikely that communication failures will occur within the local node.
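As a rough sketch of how I imagine the business-logic side looks (purely my assumptions here: the default sidecar port 3500, a state component named "statestore", and a route shape that has varied across Dapr versions):

```python
import requests

# Assumptions: the Dapr sidecar listens on localhost:3500 and a state
# component named "statestore" is configured. Route shape varies by version.
DAPR_STATE_URL = "http://localhost:3500/v1.0/state/statestore"

def save_order(order_id: str, status: str) -> None:
    # The app stays stateless; persistence is delegated to the sidecar,
    # which talks to whatever store (Redis, Cosmos, ...) is configured.
    resp = requests.post(DAPR_STATE_URL, json=[{"key": order_id, "value": {"status": status}}])
    resp.raise_for_status()

def get_order(order_id: str) -> dict:
    resp = requests.get(f"{DAPR_STATE_URL}/{order_id}")
    resp.raise_for_status()
    return resp.json() if resp.text else {}
```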
I like the idea of actors to encapsulate state management for my aggregates (a la Domain Driven Design). But I haven't wanted to be limited to just using Elixir/BEAM or an actor toolkit on the JVM.
I'd wager that Dapr's virtual actors [1] were inspired in-part by Microsoft's work on Orleans [2]. I've read through some of the Orleans docs, and Dapr looks to be a more accessible (cross-language, non-CLR!) way to build using some of Orleans' concepts and capabilities.
!! this is super cool: "You can also perform aggregate queries across actor instances, avoiding the common turn-based concurrency limitations of actor frameworks [3]."
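For illustration only, here's a hedged sketch of what invoking one of these virtual actors through the sidecar's HTTP API might look like; the actor type "ShoppingCart", the method "AddItem", and the default port are my assumptions, not anything from the docs:

```python
import requests

DAPR_PORT = 3500  # assumed default sidecar port

def add_cart_item(cart_id: str, sku: str, qty: int) -> None:
    # Invoke a method on a virtual actor instance; Dapr activates the actor
    # on demand and serializes calls to it (turn-based concurrency).
    url = f"http://localhost:{DAPR_PORT}/v1.0/actors/ShoppingCart/{cart_id}/method/AddItem"
    resp = requests.post(url, json={"sku": sku, "qty": qty})
    resp.raise_for_status()
```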
Dapr's virtual actors sound like a good idea, but besides the API spec [1], is there any better doc to look at for an example? How are the callbacks used?
I want to give it a try, but it really seems to be pre-alpha.
"Microservices" is absolutely meaningless. It's just SOA (service-oriented architecture). Services = groups of functionality and business logic that interact.
How you deploy these services can be completely arbitrary. They can be split up in different methods, classes, assemblies, or entirely separate processes running on different servers.
Going straight to that last option is completely unnecessary for the vast majority. And when it's needed, it's actually because of team organization rather than app architecture.
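A toy sketch of that point (Flask and the names here are just illustrative): the same business logic can be called in-process inside a monolith or exposed as its own process, and the code at the core doesn't change.

```python
from flask import Flask, jsonify

class PricingService:
    """The 'service' is just business logic; where it runs is a deployment choice."""
    def quote(self, sku: str) -> dict:
        return {"sku": sku, "price": 42.0}

# Option 1: call it in-process, inside a monolith.
pricing = PricingService()
print(pricing.quote("ABC-123"))

# Option 2: the exact same logic exposed as a separate process over HTTP.
app = Flask(__name__)

@app.route("/quote/<sku>")
def quote(sku: str):
    return jsonify(pricing.quote(sku))

if __name__ == "__main__":
    app.run(port=8080)
```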
I've seen small teams with microservices succeed, but not at the speed at which these things promise to return value. I've also seen developers fail when they've churned out microservice after microservice and couldn't keep up with the deployment and infrastructure overhead. As an architect I've recommended against some of these, but what do I know. I just recently asked for a service on a project, they said microservice, and suddenly I need orchestration and scheduling. Btw, I think k8s has its place; I'm not against it.
I have seen services architected and deployed beautifully: HTTP, TCP, MQ-based, event-driven. I've designed and deployed services as Windows services and as Linux-based web services, HTTP APIs. I've seen WCF, XML, REST, JSON, MQ, and messaging across multiple stacks and technologies, but I won't back 90% of new apps featuring microservices, more like 1 or 2 service endpoints, when what we actually need is governance and thought in these designs. Sometimes a service is just a service and doesn't need early optimisation. Think about decoupling and domains by all means, but don't jump into the next framework marketed at you.
Microservices are usually a bad idea for software unless your whole business model depends on lots of people using your software down the line. It's certainly not good for internal software.
What people really need to do is follow SOLID principles, use interfaces appropriately, and just generally practice good architecture.
The real benefit is the separation of concerns, which microservices promises to deliver. But well designed software does that too, and you don't have the headache of coordinating all that hardware.
I guess it's easier to sell something trendy to upper management (we're doing the new, hot thing) than to sell a code rewrite (we're not going to ship new features for a while as we opaquely push around code).
So, it does messaging, pub/sub and discovery/orchestration?
Kind of like a Redis, possibly with an etcd - all massaged into a structured/uniform API? Is that about right?
Is it only for small (ie: overengineered) setups - or is the idea that it can grow to handle millions? of messages etc?
Normally "does it scale?" isn't very interesting - but in this case it would seem to be redundant overhead if it does not let you grow to a lot of concurrent traffic?
Hm. So there is some magic to avoid all rpc calls and pub/subs going through dapr?
Eg: if I have a thousand users subscribed to a channel - only setup would involve dapr, and the rest would be dependent on my pub/sub queue manager (eg: plain redis, or rabbitmq) keeping up?
But from the [ed: dapr api microservice component] consumer viewpoint it's just "ask dapr for a subscription", "ask dapr to broadcast a message" - and you're off to the races?
Having the calls go through Dapr is actually a pretty powerful feature: Dapr will handle retries and handle failures and guarantee an at-least-once delivery. For high-throughput scenarios we have a gRPC client which can see a much higher throughput than regular HTTP.
Yes, you are right: from the consumer's viewpoint, it's "ask to subscribe, ask to publish a message" and you're off.
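To make that concrete, here's a minimal sketch of the publish side over the sidecar's HTTP API (the port, the pub/sub component name, and the topic are assumptions, and the route shape differs between Dapr versions):

```python
import requests

# Assumptions: sidecar on localhost:3500, a pub/sub component named "pubsub",
# and a topic named "orders". Older Dapr builds use a shorter publish route.
PUBLISH_URL = "http://localhost:3500/v1.0/publish/pubsub/orders"

def publish_order_created(order_id: str) -> None:
    # The app only talks to its local sidecar; Dapr forwards the message to the
    # configured broker (Redis, RabbitMQ, ...) and handles retries and at-least-once delivery.
    resp = requests.post(PUBLISH_URL, json={"event": "order_created", "id": order_id})
    resp.raise_for_status()
```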
So it would be fair to say that dapr scales to "fairly big" throughput - but not the right tool if your goal is in the ballpark of saturating a 10Gbps network with messages?
(mind, I have no such ambitions presently, just trying to get my head around what dapr is - and is not)
> but not the right tool if your goal is in the ballpark of saturating a 10Gbps network with messages
If that's in any danger of happening, then you scale out, not up: e.g. run 100 worker instances to process the messages; each one gets 0.1 Gbps of messages.
Microservices are good at scaling out as needed. I mean, not completely painless, but better than the alternatives.
Right, I know the example is somewhat hyperbolic - but one could argue that it's quite possible to build a micro service architecture around zeromq or rabbitmq - and those systems should (in theory) allow you to max out your modern hardware.
So in the sense that you might not really need micro service architecture for your regular "new thing" (ie you can run it on a modern server with 128 threads and it fits in half a terabyte of ram or some such "medium" size workload) - it's interesting to see how dapr affects you if you do need to scale.
If you struggle with N-to-M messaging, you might not want to multiply N and M with Y dapr nodes, because that might make your solution more difficult to scale.
That said I think there's a valid case to be made that it can be valuable to suffer a micro service architecture and the possibly massive signaling overhead - not to "scale", but to gain resiliency and stability. Failover, easier deployment of new code etc.
Basically anything that can give you "works like big iron" on whatever scraps of commodity hardware you're able to rent.
> you might not want to multiply N and M with Y dapr nodes
That would be bad, but my understanding of the "sidecar" is that there would be M dapr nodes, each attached to a main consumer in a container (and probably N dapr nodes as well, if the N senders are also containerised and managed this way).
Definitely not as fully fleshed out but we made some choices on the serialization/pub-sub side that allowed for more efficient binary message-passing and solved some of the slow-subscriber problems for high-frequency data.
Then we will see a shitload of articles popping up "10 reasons why Monolithic software development was the right way" and "How we reduced cost by merging all our micro services back to a monolith!!!11!!".
Microservices are a model for team development. They allow the dev team to scale easily.
If you have only one developer, like many startups do, it does not matter at the beginning, especially considering many startups never have more than 2 devs before failing.
Given the sprawl of a ton of micro-services, has anyone built around a central service bus style, maybe with Kafka, where services work a lot like unix pipes instead of ever-changing APIs? Sure, gRPC mitigates some of that, but I am curious about having a central source of truth that is also stateful (but not a relational database, etc.) that works with the whole pub/sub style.
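For what it's worth, a minimal sketch of such a pipe-like service with the kafka-python client (the broker address and topic names are made up):

```python
import json
from kafka import KafkaConsumer, KafkaProducer

# Assumptions: a local broker and two topics, "orders.raw" and "orders.enriched".
consumer = KafkaConsumer(
    "orders.raw",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Read from one topic, transform, write to another -- much like a unix pipe,
# with Kafka as the shared, durable source of truth instead of point-to-point APIs.
for msg in consumer:
    order = msg.value
    order["total_with_tax"] = round(order.get("total", 0) * 1.2, 2)
    producer.send("orders.enriched", order)
```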
Far from it. Although Dapr borrowed the Actor Framework and Stateful features from Service Fabric, there are a couple of major differences.
a) Stateful services: Your code + data won't live together on dapr. You would need an additional I/O hop to access cosmos or redis [or your own state provider] to get to the data. In Service Fabric, once you hit a partition the data lives right there on that node. You will get much lower latency for reliable collection reads with Service Fabric.
b) Service fabric runs classic .net and .net core. Dapr is only for .net core projects. This is a big deal for users who have vast assets on classic .net
c) With external state provider, partitioning becomes a little murky on dapr. Service fabric gives you range and named partitions with replicas. I am not clear how dapr handles replicas or it delegates that to kubernetes or the underlying orchestrator.
d) Service Fabric roadmap: https://github.com/Microsoft/service-fabric#service-fabric-r...