What do other engineers think of this statement with regard to startups and MVPs? Am I wrong for thinking that monolithic architectures are still the best way to get started from 0? Is it simply a matter of better tooling to make creating a micro-service architecture as easy as spinning up a rails or django api?
Microservices have their place, but I really think monolithic architectures are far more suitable for the vast majority of cases.
Every time I've seen microservices in use in the enterprise space, it's been a complete mess - no documentation, incredibly brittle services written in a multitude of languages using just as many idioms, devs scared to touch anything lest they break it, no standardised logging... doing microservices "right" requires a highly disciplined team with processes and DevOps enshrined in the company.
In fairness, it's from IDC, not Microsoft.
The same report predicted that "500 million apps" will be created over the 2018-2023 timeframe, which looks suspiciously like someone saying "100 million a year, what do you think, Frank?" and then getting the nod because "a billion" is even more clearly absurd.
It will be made so trivial to do that it will be impossible to avoid.
Medium will be awash in howtos, github will have 9,000 CLI tools to generate yml templates for all this shit
Nothing is stopping us from taking advantage of this and building locally hosted Docker/Kube setups, importing what we need from GitHub to run private social media portals for friends and family, with dynamic DNS and other tools for finding each other.
You just can't reliably work on one artifact with dozens of people without creating a lot of complexity; this is why microservice architectures can be better in some cases.
If you're not going to increase the development team beyond a single smallish group of people, you'll be much more productive and your service will be way faster as a monolith.
I'm pretty sure most SaaS companies want to scale their development teams, so it's basically a given that most want to use microservices as well.
Sounds like you are arguing that for small teams a monolith is better. No one would argue that they are not. If you read my post, I talked about businesses that become a SUCCESS, meaning that they grow beyond that initial small team. Once your team becomes large, if you started with a monolith, you will eventually need to either break it apart into microservices or try to manage the monolith across many teams.
IF microservices could be as easy to develop from day 1 as a monolith, why would you start with a monolith? Isn't that what technologies like Dapr are trying to do? Make developing microservices easier?
Can we just create a Visual Studio extension that allows you to pretend the various components of your software are "microservices"?
"Sorry, you're trying to stuff too much functionality into your XYZ class. You'll need to break it off into another class if you want the Pseudo-Microservice plugin not to complain."
Following your logic to its ultimate conclusion, you may as well also over-hire on staff, get a super expensive office, crazy benefits packages, etc. Why not? You don't need that stuff if you think your business will eventually fail.
I beg to differ. It's completely straightforward, for the most part. You split them out, one by one. That's a big part of the reason for its popularity.
If it's really that difficult, maybe it's not a great fit for the problem at hand.
But then you realize debugging and refactoring is way harder, and you aren't getting any real benefits from splitting things apart like that.
If you are starting out, please make sure the complexity and development overhead for a microservice is worth it. Don't listen to hypes.
What on earth makes people think the responsibility divisions they make on day 1 are going to be the right responsibility divisions further down the line? Transplanting parts of your data model across boundaries is a complex and painful job.
I think that's exactly wrong. Microservices might end up winning because there will be no infra to set up. Microservice communication and scaling is a pretty generic problem: it is automated, and cloud providers offer managed solutions (e.g. serverless). In the end, you just deploy code.
On the other side of the room, monoliths always need some custom provisioning and deployment. If it has to evolve into more than a monolith (queue, workers...), that's that much more infra to set up and maintain.
I don't think people talking about microservices for POCs and MVPs are talking about maintaining your own cluster and handling scaling yourself. You go for managed solutions that allow you to focus on delivering value.
Also, as tooling improves, developing a microservice is not that much harder than developing a monolith.
The comment you replied to mentioned "startups that matter" that have significant scale and have 100+ engineers. I think if you're at that point it makes sense, but when you're finding product/market fit and have 1-2 founding engineers working on a codebase, is it safe to assume microservices are a premature optimization that unnecessarily adds to a founding team's already heavy workload?
The only real reason for microservices is a big team.
Microservices should reflect your organisation.
A monolith is a small team.
It's not like this is a new and unknown opinion... It's been repeatedly pointed out for years now.
Wet dreams and a bunch of propaganda.
If I understand it correctly, it will make it easier for me to build applications by separating the "plumbing" (stateful & handled by Dapr) from my business logic (stateless, speaks to Dapr over gRPC). If I build using event-driven patterns, my business logic can be called in response to state changes in the system as a whole.
I think an example of stateful "plumbing" is a non-functional concern such as retrying a service call or a write to a queue if the initial attempt fails. Since Dapr runs next to my application as a sidecar, it's unlikely that communication failures will occur within the local node.
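To make that concrete, here's a minimal sketch of talking to the Dapr sidecar over its HTTP API (the sidecar also speaks gRPC; 3500 is its default HTTP port). The store name "statestore" and the key/value are just illustrative; only the URL and body shape come from Dapr's state-management API.

```python
import json

DAPR_PORT = 3500  # default HTTP port the Dapr sidecar listens on

def state_url(store):
    # The sidecar exposes state management at /v1.0/state/{store};
    # which backing store that is (Redis, Cosmos, ...) is a Dapr
    # component, invisible to the business logic.
    return f"http://localhost:{DAPR_PORT}/v1.0/state/{store}"

def save_state_request(store, key, value):
    # Build the (url, body) pair for a POST; Dapr expects a JSON array
    # of {key, value} items. Retries on failure are the sidecar's job.
    body = json.dumps([{"key": key, "value": value}])
    return state_url(store), body

url, body = save_state_request("statestore", "order-1", {"qty": 2})
print(url)  # http://localhost:3500/v1.0/state/statestore
```

In a real app you'd POST that body with whatever HTTP client you like; the point is the business logic stays stateless while the sidecar owns the stateful plumbing.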
There seem to be extensive, nice docs on the concepts behind Dapr: https://github.com/dapr/docs/tree/master/concepts.
I'd wager that Dapr's virtual actors were inspired in part by Microsoft's work on Orleans. I've read through some of the Orleans docs, and Dapr looks to be a more accessible (cross-language, non-CLR!) way to build using some of Orleans' concepts and capabilities.
!! this is super cool: "You can also perform aggregate queries across actor instances, avoiding the common turn-based concurrency limitations of actor frameworks."
I want to give it a try, but it really seems that it is pre-alpha.
How you deploy these services can be completely arbitrary. They can be split up in different methods, classes, assemblies, or entirely separate processes running on different servers.
Going straight to that last option is completely unnecessary for the vast majority. And when it's needed, it's actually because of team organization rather than app architecture.
I have seen services architected and deployed beautifully: HTTP, TCP, MQ-based, event-driven. I've designed and deployed services as Windows services and as Linux-based web services, HTTP APIs. I've seen WCF, XML, REST, JSON, MQ, and messaging across multiple stacks and technologies, but I won't back the 90% of new apps featuring "microservices" that are really more like 1 or 2 service endpoints, when what we actually need is governance and thought in these designs. Sometimes a service is a service but doesn't need early optimisation. Think about decoupling and domains by all means, but don't jump into the next framework marketed at you.
What people really need to do is follow SOLID principles, use interfaces appropriately, and just generally practice good architecture.
The real benefit is the separation of concerns, which microservices promise to deliver. But well-designed software does that too, and you don't have the headache of coordinating all that hardware.
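For instance, the same boundary you'd get from a microservice can live behind an interface inside a monolith. A small sketch, with made-up names (`Mailer`, `SignupService`) purely for illustration:

```python
from abc import ABC, abstractmethod

class Mailer(ABC):
    """The seam: callers depend on this, never on a concrete transport."""
    @abstractmethod
    def send(self, to, body): ...

class SmtpMailer(Mailer):
    def send(self, to, body):
        print(f"SMTP -> {to}: {body}")  # real delivery elided

class RecordingMailer(Mailer):
    """Test double; swapping it in needs no network and no deployment."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class SignupService:
    # Separation of concerns without a process boundary: if mailing
    # ever needs to become a remote service, only the Mailer
    # implementation changes, not this class.
    def __init__(self, mailer):
        self.mailer = mailer
    def register(self, email):
        self.mailer.send(email, "Welcome!")

mailer = RecordingMailer()
SignupService(mailer).register("a@example.com")
print(mailer.sent)  # [('a@example.com', 'Welcome!')]
```

The interface is the architecture; whether each side of it runs in the same process is a deployment detail.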
I guess it's easier to sell something trendy to upper management (we're doing the new, hot thing) than to sell a code rewrite (we're not going to ship new features for a while as we opaquely push around code).
Kind of like a Redis, possibly with an etcd - all massaged into a structured/uniform API? Is that about right?
Is it only for small (i.e. overengineered) setups - or is the idea that it can grow to handle millions of messages, etc.?
Normally "does it scale?" isn't very interesting - but in this case it would seem to be redundant overhead if it does not let you grow to a lot of concurrent traffic?
Dapr itself is a sidecar - it can be easily autoscaled by external autoscalers such as the ones Kubernetes offers, KEDA, and others.
E.g.: if I have a thousand users subscribed to a channel, the only setup would involve Dapr, and the rest would be dependent on my pub/sub queue manager (e.g. plain Redis, or RabbitMQ) keeping up?
But from the [ed: dapr api microservice component] consumer viewpoint it's just "ask Dapr for a subscription", "ask Dapr to broadcast a message" - and you're off to the races?
Yes, you are right: from the consumer's viewpoint, it's "ask to subscribe, ask to publish a message" and you're off.
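Roughly, against the sidecar's HTTP API it looks like this (the component name "pubsub", topic "chat", and route are illustrative; the URL shape and the /dapr/subscribe contract come from Dapr's pub/sub building block):

```python
DAPR_PORT = 3500  # default Dapr sidecar HTTP port

def publish_url(pubsub, topic):
    # "ask Dapr to broadcast": POST a JSON payload here and the sidecar
    # hands it to whatever broker the pub/sub component points at.
    return f"http://localhost:{DAPR_PORT}/v1.0/publish/{pubsub}/{topic}"

def subscriptions():
    # "ask Dapr for a subscription": the sidecar calls GET /dapr/subscribe
    # on your app at startup, then delivers matching messages to the
    # given route. The broker behind it (Redis, RabbitMQ, ...) is
    # just configuration.
    return [{"pubsubname": "pubsub", "topic": "chat", "route": "/messages"}]

print(publish_url("pubsub", "chat"))
```

Neither function touches the broker directly - that's the whole pitch: the consumer only ever talks to its local sidecar.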
(Mind, I have no such ambitions presently, just trying to get my head around what Dapr is - and is not.)
If that's in any danger of happening, then you scale out, not up. E.g. run 100 worker instances to process the messages; each one gets 0.1 gps of messages.
Microservices are good at scaling out as needed. I mean, not completely painless, but better than the alternatives.
So in the sense that you might not really need a microservice architecture for your regular "new thing" (i.e. you can run it on a modern server with 128 threads and it fits in half a terabyte of RAM, or some such "medium"-size workload), it's interesting to see how Dapr affects you if you do need to scale.
If you struggle with N-to-M messaging, you might not want to multiply N and M with Y Dapr nodes, because that might make your solution more difficult to scale.
That said, I think there's a valid case to be made that it can be valuable to suffer a microservice architecture and the possibly massive signaling overhead - not to "scale", but to gain resiliency and stability: failover, easier deployment of new code, etc.
Basically anything that can give you "works like big iron" on whatever scraps of commodity hardware you're able to rent.
That would be bad, but my understanding of the "sidecar" is that there would be M Dapr nodes, each attached to a main consumer in a container (and probably N Dapr nodes as well, if the N senders are also containerised and managed this way).
Definitely not as fully fleshed out but we made some choices on the serialization/pub-sub side that allowed for more efficient binary message-passing and solved some of the slow-subscriber problems for high-frequency data.
This is a name clash.
Is this true? Do they mean partially consuming microservices or the whole project will be broken up into microservices?
I would've been surprised if it was 50-50 by then.
Because your front-end is also a micro-service.