Real World Micro Services (micro.dev)
189 points by asim on Sept 28, 2022 | 126 comments




One of the biggest issues we face in the industry is the lack of reusability in software

Disagree with this core premise.

The biggest problems we face are that every developer is coached into believing everything they make needs to be unicorn scale and quality, that everything they build should conform to impractical levels of scrutiny and long-termism, and that it is unavoidable for even basic development needs to involve incredibly high levels of complexity in code/build/release tooling.

Microservices are just the latest weapon in furthering this agenda for the good of a tiny minority of people who value indulging their time and business costs in esoteric software philosophy goals.


Not sure how to approach that comment really. I don't think it matters whether the pattern used is microservices or not; the basic premise stands that we're still far off from reusability of software that's basically identical across thousands of orgs. Most devs are rebuilding identical software stacks to serve a very small, different business layer on top. There's probably 80% of software that we could open source as reusable building blocks. Do they need to be microservices? No, but if the logic was in libraries and we had to reimplement it per language, it would really be a waste of time. Could we package this in some other way? Someone please show me that solution, because I'll stop talking about microservices and go use that.

Ultimately, from what I've seen at companies like Amazon, Google, etc., reusability is services development exposed as APIs, with common libraries, frameworks and platforms for that development/deployment itself. That's it. It's not buzzwordy, but microservices encapsulates that model. Amazon/Google, etc. are just hundreds of small teams all using a shared model of development, so the argument that you don't need microservices at small scale is not the issue here. It's that we, outside of those orgs, don't have the level of reuse that they do, so we waste most of our time spinning our wheels building the same software, whereas if we had a set of reusable building blocks or APIs on top of cloud infrastructure, we could actually go solve even more interesting problems rather than that being something only possible at massive orgs.


And the premise that there can be reusability of software at that scale is the flaw.

> Most devs are rebuilding identical software stacks to serve a very small different business layer product.

The problem is plainly that we don't recognize that the "identical business layer" does not exist. On the surface they seem identical, but each has its own features/prerequisites/peculiarities etc. And you end up having to fight that "reusable framework" to handle all this, because to be reusable it has to stay generic enough, and the more generic something is, the more useless it is for the concrete problem and the more time-consuming it is to reuse.

In the golden era of Web Services the motto was to make software reusable the same way electricity is reusable through standards (just plug in your device), forgetting that electricity is reusable only because it doesn't really do anything; all implementation of the useful work is in the device you plug in. And what's more, these devices are now forced to work with inefficient standards: hey, I need just 2W of power to work, but to interface with the Reusable thing I must waste 5W on conversion. Fortunately we still don't have that kind of reusability in software, and I hope we never will.


I work for a company that builds embedded systems, commonly on a core family of processors. The way we get around this is to have an ever-expanding core set of reusable software that can be customized for each project. At the beginning of a project, you fork that core and add it to your project. If you identify a bug in the core, you discuss it with the maintainers, do a PR and the fix gets merged back for everyone to use. Similarly for extensions that are likely to be usable across the company. Stuff that's only relevant to your project stays on your fork.

It's not perfect, but it saves an amazing amount of duplicated work. The problem now is that the framework has grown big enough that people new to the company don't always know what's in it, and accidentally end up implementing something that already exists. We're still working on a solution to this.


> The problem now is that the framework has grown big enough that people new to the company don't always know what's in it, and accidentally end up implementing something that already exists. We're still working on a solution to this.

If you ever solve it, please let me know! Discoverability is hard.

Many years ago, we were trying to introduce a form of peer review process for the group I worked in, and we did a few exercises to see how well it worked in practice. I will always remember the striking result that around 50% of the issues raised during the first code review experiment were about reimplementing functionality that was already available somewhere else that the developer just hadn’t come across before.

I find this problem often happens with large standard libraries. Languages from Python to Haskell come with huge numbers of useful data structures and algorithms out of the box that can reduce a handful of lines of boilerplate to a one-liner with a standard name if you just use the provided tool. But of course, first you need to know that that tool exists, or at least have enough intuition that it might/should exist to check for it.

The best practical solution I’ve encountered so far is Haskell’s, where the strong culture around static typing means data structures and other important patterns tend to have explicit and well-known names, and where we have Hoogle, which is essentially a search engine that looks for known functions based on their input and output types. Need a function that checks if a given value is in a list but can’t remember what it’s called? Just search for `[a] -> a -> Bool` (Haskell syntax for a function taking a list of some type and a single value of that type and returning true/false) and you’ll soon find `elem`. Oh, and `notElem`. And their respective generalisations that work on types other than just lists, too.

In today’s world full of package managers and external dependencies and remote APIs and microeverythings, it feels like standardising this kind of search tool could have enormous benefits. It’s a problem that will always be difficult to automate away with tools given the fundamental limitations, but developers who know their own intent when writing code could certainly save a lot of time and make that code shorter and safer if we moved in this direction.


At the large scale, sure; at the small scale, no. For example, look at all the different implementations of JSONPath in all the various languages that do exactly the same thing: take in some JSON, extract some data from a specific point in it based on a string input.

do we _really_ need to rewrite that every time we start a project in a different language that doesn't already have an implementation?

Or, could it be loaded/used via a service or other resource that's made accessible in some language agnostic way?


The Spring Cloud ecosystem is headed in this direction. Most of the common needs for microservice development, such as service mesh, service discovery and circuit breakers, are getting incorporated into the ecosystem so as to reduce repetitive development. Not sure how many devs are actually using this.

They are building upon Netflix offerings.

https://spring.io/projects/spring-cloud-netflix


But then I look, e.g., at your weather microservice, and again see a (configurable, but still expecting some already-standardized format) wrapper around an existing API that once more mingles the data all around?

Sorry, but really not getting what this is achieving/solving?


Building a standard set of interfaces consumed as APIs, regardless of what the backend implementation is. Something that exists on a network rather than as a library, so it can be executed and changed ad hoc without having to upgrade thousands of binaries.

It also provides opportunities to do far more interesting things with the data when you have enough users at scale e.g predicting demand, caching data, etc.


What is the standard you built, how is it specced? In code that I just looked at?

Man I feel old and stupid, but really still not got it any further :( You made it even worse now.. "lives on the network.. without updating thousand binaries" - huh what, how?

You sound a bit like Urbit btw, not sure if appropriate ref though.


lol, really don't want to be in the same sentence as Urbit so please no.

Everything is defined as protobuf interfaces, which is a standard used by Google and everyone else now that gRPC is so dominant. So the idea is: define the API in protobuf, code-generate and implement the handlers for it. The service can be called by other services on the platform using that code generation, and then an API gateway, which Micro provides, can be used to call services externally using the same format but over HTTP/JSON.
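
For illustration, the external HTTP/JSON path might look like this from a client's side (a minimal sketch; the gateway address, service name and request shape are assumptions for the example, not the actual Micro API):

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        // The gateway exposes RPC services at /<service>/<endpoint>;
        // this address and the helloworld service are hypothetical.
        body, _ := json.Marshal(map[string]string{"name": "John"})
        resp, err := http.Post("http://localhost:8080/helloworld/Call",
            "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out map[string]any
        json.NewDecoder(resp.Body).Decode(&out)
        fmt.Println(out) // decoded JSON reply from the service
    }

The caller needs nothing but HTTP and JSON; internally the same endpoint is served from the protobuf definition.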

To take that even further, M3O (m3o.com) codifies protobuf to openapi specs and then generates client libraries on top. You can see some of that in https://github.com/m3o/m3o.


Total tangent, but gRPC is not dominant and I think that belief is cargo-cultism.


Yeah I've managed to avoid gRPC for my whole career to date, and I've done a LOT of work on web services and microservices.


Thanks, that now made more sense. I'd put this, condensed together with https://micro.dev/blog/2022/09/27/real-world-micro-services...., more prominently into the readme of https://github.com/micro/services ! Looking at that GitHub repo alone makes it hard to connect the context.


Thanks I will do that!


>> One of the biggest issues we face in the industry is the lack of reusability in software

> The biggest problems we face are that every developer is coached into believing everything they make needs to be unicorn scale and quality

I kind of disagree with both of these statements. Reusability/DRY/etc. is just a useful technique or pattern. If you can do it, cool. If it's not worth it, factoring in complexity and cost, carry on.

Building something stable, reliable, and dependable, which is what I take "unicorn scale" to mean, is not necessary, it is simply good engineering. If you're thinking this is necessary for every project, you're not necessarily a bad developer, just a specialist, and the only real problem is if you don't realize that.

Software is a blend of engineering, art, and craft. Knowing what mix the current problem calls for is really the core pillar of being an effective developer. Too much engineering in your prototype and you've undermined its flexibility. Too much craft in your production service and you've created a maintenance nightmare. Not enough art in your API and nobody wants to use it.


I don't take "unicorn scale" to mean "stable, reliable, and dependable" in the context of the original post. You seem to be a reasonable person who values building "stable, reliable, and dependable" systems. There is another segment of the tech world that has used the wrong tools and techniques for their domain based on experiences shared by the largest companies. This is often resume-driven development. It results in a lot of toil and rework for those maintaining those systems.

The challenge is that assessing outcomes is very contextual. There are teams that will be successful with a completely overengineered system built with what seemed like an unrealistic toolkit, and there will be teams that fail with an overengineered system and seemingly better toolset and team. For example, I'm aware of pharma teams using K8s and Svelte for small apps when a Django or .NET app would be sufficient.


Perlisism #74: “Is it possible that software is not like anything else, that it is meant to be discarded: that the whole point is to see it as a soap bubble?”

http://cs.yale.edu/homes/perlis-alan/quotes.html


Microservices with an api proxy were nice for me when I ran a small team because different subteams could use their preferred language, and set their own release cadence, and autoscale separately. It wasn’t so much about scale or quality as it was letting different teams independently build/deploy/maintain/scale different routes on the same public api server hostname.

We also got to centralize the read cache memory pool in the frontend, so each backend service is not potentially caching duplicate stuff, saving on memory across the org.


Microservices are usually a solution to an org. level scaling problem before a technical scaling problem*. It's easier to coordinate development resources around small codebases than it is larger ones. It's a human problem, not a technical one.

* Not always true of course


I disagree that they actually solve that problem. Instead, they bake organizational fragmentation into the tech infrastructure. This is one way to accomplish encapsulation, but it comes with a bunch of costs. The reason I think it doesn't address scaling is that delivering functionality that crosses org/service boundaries becomes more complex, potentially reducing scalability.


This is fairly true, except at unicorn level. In a unicorn start-up, where things happen extremely fast and your valuation skyrockets, it's more likely the code isn't exactly beautiful and tidy.

I understand where the author is coming from. You want to use tools developed in-company instead of being cut off from them when you leave.


Just basic due diligence about performance and knowing your storage engine can easily get you to unicorn level, and keep you there.

Stack Overflow regularly bragged about how their eight servers and a few MSSQL replicas were basically always in a coma, even at peak hours. That's just good engineering.

Microservices themselves solve nothing out of the box.


Yea, like you're part of this club that has exclusive access to something; you contribute to it, deliver value, see it grow, and then it's gone when you leave. It exists within a silo, and for the better part of a decade that's really irked me, but I haven't quite figured out how to solve that problem beyond doing it in a shared open source repo and a shared platform. I think the issue is it's bigger than any one person, and you have to find a way to sell thousands of people on the idea. My starting point was code, and now I wonder: could I have approached this differently? Is there another path in which this would actually succeed? I'm still trying to figure it out and it's driving me crazy. Next I'll be writing a protocol, no joke: https://github.com/micro/network


So much time is wasted with DRY and abstracting everything. There's a time and a place for those, but they're usually treated as theology, the result often being much worse software than something that developers can understand without having spent years on a project.


This is a noble idea that unfortunately will not work. As a CTO of another small startup, I have no incentive to reuse those services for a number of reasons:

1. Trust. I need a strong vendor or community behind third-party code, one which will issue security updates, have a transparent roadmap that I understand and agree with, and a good paid licensing and support business model. It's a high bar, but having some control over the supply chain is important.

2. Architecture. Domain-specific services that integrate with some APIs are middleware: either I have to build some other middleware to integrate them with loose coupling, or I have to accept their integration contracts as the basis for my solution architecture. Neither looks good to me.

3. Cost. If the problem is generic enough, why should I take only the code but spend money on DevOps to deploy it? No-code SaaS today is great and it's all-in-one, being cost-efficient simply because it eliminates expensive internal maintenance effort. And it's a commodity today; there are often many alternatives for a given problem, so vendor lock-in isn't as scary as it was before.


I value your points because they are the same concerns I would have as the CTO of a company. Vendors don't yet care enough about this problem to invest in it, and I think ultimately that's a mistake, because while we have standardisation at the cloud infrastructure layer, we're missing everything above it. The cost of development to an organisation in these services is quite frankly astronomical. You've got hundreds of devs rewriting identical CRUD services or proxy shims to existing SaaS across the entire industry. That's millions in capex just being burned.

In relation to being a CTO of a small startup, yea OSS maintainer risk is tough. You want to use projects that are used by hundreds of companies and actively maintained. In my case, I am the primary maintainer and it's used for a cloud service called M3O - https://m3o.com. I think it will take a while before we're in a place to warrant more buy in but my hope is eventually it'll get there or at the very least people will come to use the APIs serviced by M3O.

On architecture, I mean you're quite literally talking about software "build vs buy" tradeoffs for the entirety of all software you ever write. In this case, do I integrate something else or write it myself. I think that comes down to the same assessment of whether you should offload to some other piece of software versus your own. When it comes to domain specific services this is always tough yet we see the adoption of the likes of Twilio for SMS, Sendgrid for Email and Stripe for Payments so I'd argue we're getting closer to blurring the lines now.

On cost, you can use the hosted offering - https://m3o.com - but at this point the reason I'm sharing the open source services is really that I think the adoption curve of a cloud service takes a lot longer, especially with domain-specific services. I would argue that while these services appear on the surface to be a commodity, the development time and integration cost of using bespoke independent APIs or services is much higher. Everyone internally ends up writing proxies/shims to SaaS products to try to eliminate this risk for themselves. I just think we should standardise a lot of our business logic service consumption.


A not-so-tiny nitpick: all of the code examples on m3o.com for DB access are a recipe for command injection. They all use a string in "columnName == literal" format. That's near the top of all top-10 security don'ts lists.


Yea, we are aware; there's a lexer/parser, but ideally we should have some Query op and use prepared statements. I will probably be looking for some additional help on that front in the near future.


Can you point us to the areas where you are looking for help? If you save us the work of looking for them, you might get a PR for your trouble.


Hey, I have a non-comprehensive list of issues for new services at https://github.com/micro/services/issues but most of the bugs/features for other stuff are still in my head. I think purely on the DB front we need to add a new, more powerful Query endpoint that will accept a repeated array of Filter options, something like the below.

    message QueryRequest {
      repeated Filter filters = 1;
    }

    message QueryResponse {
      repeated Record records = 1;
    }

    message Filter {
      string field = 1;
      string op = 2;
      string value = 3;
    }

Something like that. I think otherwise a majority of issues really become clear through use, so those using the services tend to see where the pitfalls are or where they can contribute. It's harder to do that without context and arguably less enjoyable for the person contributing as they're not seeing that change in whatever they're doing.
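
For what it's worth, here is one way a Filter list like that could be compiled into a parameterized query, so values are bound by the driver instead of being spliced into the SQL text (a rough sketch with assumed allow-lists, not the actual service code):

    package main

    import (
        "fmt"
        "strings"
    )

    // Filter mirrors the proto message sketched above.
    type Filter struct {
        Field, Op, Value string
    }

    // Only allow-listed operators and column names ever reach the SQL string;
    // values travel separately as bind parameters.
    var allowedOps = map[string]string{"==": "=", "!=": "<>", "<": "<", ">": ">"}
    var allowedFields = map[string]bool{"name": true, "age": true} // per-table in practice

    func buildQuery(table string, filters []Filter) (string, []any, error) {
        var conds []string
        var args []any
        for i, f := range filters {
            op, ok := allowedOps[f.Op]
            if !ok || !allowedFields[f.Field] {
                return "", nil, fmt.Errorf("disallowed filter: %+v", f)
            }
            conds = append(conds, fmt.Sprintf("%s %s $%d", f.Field, op, i+1))
            args = append(args, f.Value)
        }
        q := "SELECT * FROM " + table // table name assumed validated upstream
        if len(conds) > 0 {
            q += " WHERE " + strings.Join(conds, " AND ")
        }
        return q, args, nil
    }

    func main() {
        q, args, _ := buildQuery("users", []Filter{{Field: "age", Op: ">", Value: "21"}})
        fmt.Println(q, args) // SELECT * FROM users WHERE age > $1 [21]
    }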


Why would I use this instead of just using e.g. a postgres jdbc driver? How would I use this? A query compiler library (or service)? How do you do a join? Group by? CTEs? Window functions? Transactions? Bulk inserts/updates? Index and constraint management? Temporary tables? Why does the read endpoint have a hard limit?


Why not just use PostgREST for DB?


Just a desire to define an abstraction that's going to be consistent across all the services built. The value of a services catalog isn't just many separate services but the consistency and ability to reuse many in a coordinated fashion.


You know what though, I think we've come a long way. We used to have to DIY our builds: custom Makefiles, custom CI/CD, VCS... Look at how much you can reuse today if you're on AWS, for example, using tf and cdk.


Do you mean like 25 years ago? I've never had to do any of this. I did it, sure - but only because I wanted to, not because I had to.


> GitHub made a major revolutionary change for developers, enabling all of us to reuse libraries, and code through reuse rather than writing everything from scratch

This doesn't seem at all accurate.


Yeah, PEAR, CPAN, Maven and RubyGems all predate GitHub. And even before those there were other ways to share code. The author must be very young.

CPAN is from 1995.


Also DLLs, all of the Debian ecosystem, etc etc. So much incredible stuff (and DLLs too) that still exists, and powers most of what we build now.


Born in '84. I have been a user of all the above. The reason I credit GitHub to this level is that from 2009 onwards it truly unlocked sharing of code at a level I had not seen before. It really did eclipse what came before in terms of software reuse, and a lot of that has to do with the associated social network and network effects. They in a way gamified software for a new generation. And it's not that we didn't share software before, just not to this degree, and that's because every decade we rebuild the same ideas for a new generation that then 10-100x's what came before. So something will come after GitHub as well. But today that's what we have, and it really changed software for a generation.


I think Stack Overflow achieved the gamification and sharing of ideas much more than GitHub, and package managers such as NPM unlocked more and more reuse. GitHub has had some big benefits, but I think you're exaggerating quite a bit. Mercurial, SourceForge, etc. were already around doing quite a lot of what early GitHub did, and while it's true that lots of projects transitioned from them to GitHub (it looked nicer, it was free and it was Git), it wasn't the sea change that some of the other platforms brought.


To me it sounds like you mean GitHub put collaboration on a new level.

But reuse - no. I don't remember ever pulling a dependency from GitHub (stupid question, but is that even possible?), although I've been doing that for many years from Maven, NuGet, npm, CPAN.


Pulling dependencies from GitHub is standard usage for the package managers of Ruby, Go, Rust, and others. That is, they pull from Git, and it doesn’t have to be GitHub but it usually is.


Cargo does not pull from git or GitHub by default, you must specifically request that.


Sorry, yes, for Ruby and Rust I should have said “they can pull from git”. Point being, it is possible and when you do it GitHub is the most likely source.


Ruby gems (libraries) aren't pulled from GitHub. Yes, you CAN pull them from there, but most public and published gems don't come from there.


Don't you think immense availability is also detrimental? Crowdsourced ideas can lead to is_even.js too. I have a lot less appreciation for the recent years than most, it seems.


I mean, opinions are better than generic solutions, so yea, I think having X implementations of the same thing isn't of value. But at the same time, something as simple as GitHub star counts, forks, etc. created a signal for what had momentum, impact and the most use. Nothing is perfect, but we find ways to overcome those issues. In reality we never end up with just one de facto standard, but maybe ecosystems are born that end up being quite valuable.


I don't know, I see a lot of energy spent that's for sure but a lot of running in circles too. I miss centralization a bit (just a bit)


The way I remember circa 2010, GitHub made two techniques mainstream:

* constantly creating branches even for small changes because merges became so magical (e.g. in comparison with SVN)

* code reviews because they had nice visual tooling built-in

In my opinion, reusing libraries came from the OSS movement in general. There were SourceForge & co. before GitHub.


Git, rather than GitHub, made branches cheap.

Code reviews... maybe? Outside of OSS you're probably right about them being less prevalent. Arguably a backwards step compared to XP though!


This feels very not-reusable to me. The guy has an idea that protocol buffers are the way to go. Cool, but in many cases you really want to put a queue in front of things like email sending, etc. And if you make an email microservice that can't handle attachments, it again isn't reusable.

I would be more in favour of having the option to point every service at a RabbitMQ queue to listen on, with a message format that contains the delivery-report queue to use when an email is sent, to notify the rest of the system.
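
Something like the following message shape, say (a sketch of the idea only; the field names and queue-naming conventions are invented for the example):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // EmailTask is what a producer would publish to the email service's queue.
    // ReplyTo names the queue the delivery report gets published to, so the
    // rest of the system is notified without being coupled to the service.
    type EmailTask struct {
        To          []string `json:"to"`
        Subject     string   `json:"subject"`
        Body        string   `json:"body"`
        Attachments [][]byte `json:"attachments,omitempty"`
        ReplyTo     string   `json:"reply_to"` // e.g. "email.reports.billing"
    }

    // DeliveryReport is published to ReplyTo once the send has been attempted.
    type DeliveryReport struct {
        MessageID string `json:"message_id"`
        Delivered bool   `json:"delivered"`
        Error     string `json:"error,omitempty"`
    }

    func main() {
        task := EmailTask{To: []string{"a@example.com"}, Subject: "hi",
            Body: "hello", ReplyTo: "email.reports.billing"}
        b, _ := json.Marshal(task)
        fmt.Println(string(b)) // what would go onto the wire
    }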

And here we come to the conclusion: everyone has their own idea. This is why it is hard to make really good reusable microservices, unless each of them becomes not so micro.

Nice that people try, but I'm still in the camp of building from Lego bricks provided in libs and creating my own microservices, interconnected in whatever way fits the given project. It might be protobuf, HTTP or queue systems.


That's the vibe I was getting; these all feel like focused libraries, but using these in an application architecture feels like you would have a distributed monolith. You need resilient task queues to turn it into a 'proper' microservices architecture, which is honestly a much more challenging task than encapsulating concerns like sending emails or geocoding.

The other thing, as another commenter pointed out, is that these are nanoservices; I find it really difficult to imagine a situation where you would ever need 10x of one service for 1x of another, for example. That's a litmus test for microservices: do they have to scale independently?

A lot of these just interface with an external service that does the heavy lifting, so the performance requirements for each of them are low; there's little to no value in running them on separate, independently scalable VMs.


After a quick look at the code, this looks to me more like nanoservices than microservices, and nanoservices are basically an antipattern. I would never host these; I'd expect most of this functionality to be a library/package for the programming language I use for my projects.


You're right that some services are still a single endpoint, e.g. SMS. But the goal is to address the immediate need and expand on requirements beyond it. No one using SMS has asked for more yet.

Other things are more full-fledged, e.g. a DB service that provides an HTTP interface on top of Postgres, the user service with email verification, and the app service that does hosting on top of Google Cloud Run.


It is not about the number of endpoints - libraries also have multiple "endpoints", and some are used and some are not. It is about reasonable code packaging. Most of those "micro" services just wrap some single-purpose libraries and expose them as REST endpoints. That is why they are more like nanoservices and not microservices. This leads to a modern version of "dependency hell", but much more complex, because now we have to deal with a whole class of potential network problems.


Can you point to the ones you're talking about? A lot of the services are actually not that; they're offering simplified API and RPC access to data that's siloed elsewhere, or to systems that are quite complex to manage. There's some stuff we wrote for fun, because writing arduously complex code can be tedious and I enjoy playing around with random things, but I wouldn't waste a lot of my time wrapping libraries. OK, so an "id" service is a bit like, maybe we don't need that to be an API call, but let's say you do, and you do want unique IDs distributed across many nodes; that was actually a hard problem, solved by stuff like Snowflake [1] and distributed node management like ZooKeeper.

https://en.wikipedia.org/wiki/Snowflake_ID


The email service, DNS service (64 lines of code), file service, image service - to name just some of them. Those are essentially little wrappers around standard libraries, exposed as REST services.


So in a case like email, it's a local shim to SendGrid. What this essentially means is you have one service that's storing your API key and gatekeeping access to sending email, rather than every service doing it independently.
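
In other words, something like this thin gatekeeper (a hedged sketch: the SendGrid mail-send URL is the public v3 endpoint, but the port, env var and pass-through handler are assumptions; a real shim would translate a simpler request format rather than forwarding the raw body):

    package main

    import (
        "io"
        "log"
        "net/http"
        "os"
    )

    // The API key lives only in this service; callers POST JSON here and
    // never see the SendGrid credentials.
    func sendHandler(w http.ResponseWriter, r *http.Request) {
        req, _ := http.NewRequest("POST",
            "https://api.sendgrid.com/v3/mail/send", r.Body)
        req.Header.Set("Authorization", "Bearer "+os.Getenv("SENDGRID_API_KEY"))
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        w.WriteHeader(resp.StatusCode)
        io.Copy(w, resp.Body) // relay SendGrid's response to the caller
    }

    func main() {
        http.HandleFunc("/email/send", sendHandler)
        log.Fatal(http.ListenAndServe(":8081", nil))
    }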

The DNS case is basically DNS over HTTP, which might not be useful to you since you can implement it yourself, but to someone who doesn't code it's useful.

The image service is storage and serving on top of Micro, which arguably, yea, you can do elsewhere, and the wrapper around ImageMagick you can do yourself, but we've found there's a ton of people who just don't want to reimplement this logic.


All of that is basically standard library over HTTP


Why is the standard library not an HTTP call? Maybe all of this stuff ends up in WASM, I don't know, but I just get the feeling that if it was all over a network that's constantly evolving and becoming faster, we'd end up with something that looks more like a networked and emergent language. Programming languages are stuck in the old world of compile-and-run-locally, but what if they weren't? What if they were networked and evolutionary?


We already have "standard library as an HTTP call." It's called modern javascript. Having your software depend on an infinite fractal of networked code that can dynamically wreck your shit at any arbitrary moment in time is a bad idea.


> Having your software depend on an infinite fractal of networked code that can dynamically wreck your shit at any arbitrary moment in time is a bad idea.

This is the most succinct argument against Kubernetes I've heard yet.


What you describe sounds familiar; I believe it's Akka's actor model (https://doc.akka.io/docs/akka/current/typed/guide/actors-int...) that allows actors to run either on the same machine or on a different machine (or cluster of them), completely transparently to the developer. Is that something that interests you? It means the developer doesn't have to think about RPCs or consuming HTTP APIs in the first place.


> Why is the standard library not an HTTP call?

You mean like string concatenation? Come on man. As far as evolution goes, that’s what semantic versioning is for.


I think nano services follow the "Unix Philosophy". https://en.wikipedia.org/wiki/Unix_philosophy

It would be cool if it were possible to pipe the output of one of these services into the input of another, like how pipes work in Unix.
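
In spirit, something like this (a sketch only; both endpoints and their request/response shapes are hypothetical):

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "net/http"
    )

    // call POSTs a JSON body to a service and returns the raw JSON response,
    // so one service's output can become the next one's input, pipe-style.
    func call(url string, body []byte) []byte {
        resp, err := http.Post(url, "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        out, _ := io.ReadAll(resp.Body)
        return out
    }

    func main() {
        // The moral equivalent of `geocode "Berlin" | weather` in a shell.
        loc := call("http://localhost:8080/geocoding/Lookup",
            []byte(`{"address":"Berlin"}`))
        fmt.Printf("%s\n", call("http://localhost:8080/weather/Now", loc))
    }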


I built something like this using RabbitMQ for automating delivery of online documents (taxes) for an accounting company. I had a queue per function, a DAG defining the workflow, and the tasks themselves would know who to forward the result to. Different libraries of tasks ran in different environments with different permissions (filesystem access ran on a Windows server, for example, to access the file shares), and only browsers had Internet access (and only to certain domains).

I eventually switched to a code monolith running in a segmented way with temporal.io, where I have libraries of activities which run in different nodes. You might be interested in their product.


I would love to be able to do that as well. I always thought this would be some sort of service composition or batch query model. Both Micro and M3O have CLI tools which can enable that, so technically you can do something like it already, but native support would be WAY better.


My friend created something like this at kodou.io.


Hey, maintainer here. Happy to answer questions. I know some people are skeptical of such an idea working outside of an existing organisation or without large vendor support, but I had to take a shot nonetheless. These are built and hosted for an API platform called M3O, so they're actively in use 24/7.


I like the idea and the website. I also like how you were able to stick to this project for the past 7 years.

Who is your provider for SMS?

Can we add our own APIs to M3O?


The SMS provider is currently Twilio. You can contribute services to the micro/services repo and they will be hosted on M3O. I welcome a sort of collaborative approach to that. We used to enable each user to have a *.m3o.dev subdomain to host their own, but that product grew too slowly, so we shifted focus entirely to serving through this one pipeline.


I guess it depends on what "real world" means here. These are all things that people would want in the real world, but this collection isn't built for the real world. Am I really going to build my business around a cache service with no meaningful operational documentation? I mean I guess if I need to also get NFTs and Islamic prayer times, this is a compelling package.

But really: much of this is either a) stuff that should be a library, even in a microservice ecosystem or b) could just be a static file (like the list of holidays).


I think the more interesting aspect of this is the framework being used: https://github.com/micro/micro

I haven't dug into it at all yet, but at a glance it looks like it's aiming to do something similar to what Go kit (https://gokit.io/) or Finagle (https://twitter.github.io/finagle/) does, where it gives you a nice abstraction for defining your "service" and then handles all the supplementary aspects (service discovery, serialization, retry/circuit breaker logic, rate limiting, hooks for logging, tracing, and metrics, etc) so you don't have to build those from scratch every time.
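
The core abstraction in that family of frameworks tends to boil down to a function type plus decorators layered around it, roughly like this (a generic sketch of the pattern, not any one framework's actual API):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"
    )

    // Endpoint models a single RPC method as a plain function.
    type Endpoint func(ctx context.Context, req any) (any, error)

    // logging wraps an Endpoint with one supplementary concern; retries,
    // rate limiting and tracing middlewares all have the same shape.
    func logging(next Endpoint) Endpoint {
        return func(ctx context.Context, req any) (any, error) {
            start := time.Now()
            resp, err := next(ctx, req)
            log.Printf("req=%v err=%v took=%s", req, err, time.Since(start))
            return resp, err
        }
    }

    func main() {
        var greet Endpoint = func(ctx context.Context, req any) (any, error) {
            return fmt.Sprintf("hello, %v", req), nil
        }
        greet = logging(greet) // stack as many middlewares as needed
        resp, _ := greet(context.Background(), "world")
        fmt.Println(resp)
    }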

I don't know if any of those other frameworks could really be considered very "successful" outside the original organizations they were built for (it seems like the industry has bet more on service meshes and API gateway products), but I'd probably be more inclined to start with one of them than making a new framework.


I spent quite a few years working on a standalone framework called Go Micro, which has now been donated to a community - https://github.com/go-micro/go-micro. Ultimately it never really achieved the potential standardisation I was hoping for, e.g. something like gRPC.

Micro is more of an all-encompassing platform that addresses not just writing code but running, consuming and securing it. I've been using it in production for 3-4 years now, after a lot of pure OSS development. Still, as others are saying, it may never reach its true potential without the backing of a big vendor.


I love the idea and I also agree that there is so much duplicated effort but the difficulty of making the "one service to rule them all" is that there is pressure for it to be all things to all people, at which point, it becomes harder to use for a simple use-case.

Take the crypto microservice. Someone says, e.g. "I notice that it doesn't support Argon2". If it doesn't, I can't use it, I would need to write my own library. Arguments ensue about you don't need Argon2 or "why not contribute the change to the library?". Maybe the maintainer doesn't want to accept the PR because they don't think the library needs it or they are not sure that the quality is high enough to accept. The library gets forked internally and the fork takes on a life of its own.

I don't want to be negative, but my experience is that people have a certain way of doing things, and unless these services do it in that way, people won't use them.

There are other ways in which they wouldn't be reused, e.g. if people don't know how to build/deploy Go, or if they are only looking for something simple and the fully featured libraries are too hard to understand.

Maybe having something is better than nothing but I think the saying is something like, "in trying to be all things to all people, you end up being nothing to any of them".

I really hope I am wrong though because the amount of duplicate work in the world depresses me as an engineer!


Microservices without events are useless. I'm tired of repeating myself and I'm tired of the buzz hype.

All this does is put you at a disadvantage versus just a package that isn't hosted on its own.

I'm tired of seeing wrong theorems applied and sold as something good when in reality it's bad.

Why this makes the top page of HN is beyond me


You seem exasperated, but it's not clear what point you're making. If you're saying that these utilities would be better off as libraries, what you get hosting them is language independence. With extremely light runtimes, the cost of RPCing to access some logic isn't that much greater than a library call. For things that aren't performance critical, the overhead might be fine.


> Microservices without events are useless

why?


It took me a while to work out whether this is satire or an actual effort.


I also can't really take this seriously. Maybe I'm ignorant of something about the Go ecosystem that makes this relevant, but this seems truly bizarre.

The "answer" microservice says: "Instant answers to any question" / "Ask a question and get an instant answer". Okay? Like, is this a service that I would have to back with my own knowledgebase? In which case this is just a service that accepts questions and serves answers? Why would this need its own service then?

The "address" microservice says: "Lookup UK addresses by postcode. Simply provide a valid postcode and get a full list of addresses". Why is this not called "uk-address" then? Or can it be modified for arbitrary locations? Does it handle when new addresses are created? Can it do validation?

How much demand is there for a "joke" microservice? And I think it shows a lack of professionalism to seed such a microservice with something like this: https://github.com/micro/services/blob/master/joke/examples....


It's real code https://github.com/micro/services

It runs on software that's also real https://github.com/micro/micro

It's hosted on a cloud platform with real customers https://m3o.com


Me too. However, I'm also not all that familiar with the Go ecosystem. Maybe they don't have the concept of libraries?



A colleague of mine is currently working on a 20+ years old micro services system. It has more than 50 services that are all super busy sending messages to each other in a wonderfully chaotic jaw dropping WTF way. While solving really simple business problems. It is without a doubt the most complex and badly designed software I have ever experienced or read about in my 30+ years career. I am predicting that 10 or 20 years from now developers unlucky enough to have to take over micro services systems created today will curse the developers who today are inexperienced enough to think micro services is a good idea.


Real world micro services without tests?


Testing in production https://m3o.com


As others have said already, that's nowhere near enough. In general, I advise my teams to stay away from any Open Source projects that don't have a robust testing approach.

Toby Clemson's infodeck on Martin Fowler's site has a lot of information about testing microservices: https://martinfowler.com/articles/microservice-testing/

"Testing in production" doesn't necessarily mean "Test only in production". Cindy Sridharan has a nice article about the nuances involved: https://copyconstruct.medium.com/testing-in-production-the-s...


That's not nearly enough.


Probably not, but users tell me when something isn't working. I've spent my entire career in this sort of development model, where 80% was good enough: put it into production and move on. When something breaks, fix it; when something is really complex, test it; when the logic is critical, mock it.


This is likely to be a core issue for most people. In reality it isn't "if it breaks, fix it", it's "if we notice it breaks, and we can remember what it should do, we can probably fix it". Tests allow you to spot when you break things, and also encode what they should do.


Tests also cost money to write and to maintain. It's a tradeoff. Personally I like to work at places where the tradeoff falls in favour of writing tests. But I understand there are some businesses that just don't have a lot of money and where the cost of things breaking isn't that high, and then the right decision might be to skip tests. And one might argue if the customers don't notice it's broken, does it really matter?


I agree they cost something, although I think they give back far more than they take. In this case though, I would say that not having them in place means people won't adopt these proffered microservices, as everyone will have to individually implement tests around them.


That must have been at really early-stage, web-focused startups then? Even there I cannot imagine how this could be... just wow.


The related value is likely a service catalog and the common transports, not the services themselves, and that ship has sailed already with things like Kong Service Hub or the Swagger / OpenAPI spec [1].

A lot of people will likely not want to wait behind the open-source standardization effort on how to extend the service definitions for an API like email sending or file reading, e.g., does the file ACL go in the generic metadata map [2]?

There probably could be value in service definitions and a proxy that let you directly access a proprietary API that only happens to offer REST/JSON, but with a transport of your choice like gRPC. But trying to come up with a standard API without collaboration among the largest industry players will likely go virtually unused.

[1] https://en.wikipedia.org/wiki/Swagger_(software) [2] https://github.com/m3o/m3o/blob/fb7eb42512e4c7db49ab03da0b47...


Who is operating the microservice? What are the rate limits? Are there status pages? Who made the code? How secure is it? Can it scale?

Or are we supposed to clone all the repos and host them ourselves? Do they all have the same deployment CI/CD?

A grouping of open source repos is useful. But there is a reason why I pay companies for “freely available” services when I have my own revenue on the line.


Right now I'm just sharing a set of open source services. A hosted offering exists as an API platform called M3O as mentioned in the post. Ideally just as infrastructure is now shared as cloud services maybe the same would occur here but not always a feasible starting point.


So... a whole new set of "reusable" components that must be vetted & audited? Why should these be trusted? Why should these be adopted and integrated into a team's ecosystem instead of their own work? I have a hard time seeing the true utility of this collection.


> GitHub made a major revolutionary change for developers, enabling all of us to reuse libraries, and code through reuse rather than writing everything from scratch

Uhhh, pardon me? We were sharing code, even packaged libs, long before GitHub or even Git.


I think it's a good direction to provide standardized services for running multiple containers as a cloud application. The current state is: get some "legacy" (old school unix) server, pack it into a container (with some ugly shell hackery, albeit in a Dockerfile or Kubernetes manifest), provide configuration through environment variables or rewrite the old-school .conf files through more shell hackery. All state management of those servers is done through more shell hackery (select statements or nc for databases; nc or wget for http services; touch a random dotfile after a certain state is reached).


My perspective is that code reuse is a problem when "business logic" gets fused together with implementation details. When you evaluate bringing code into your project as a dependency, the primary blocker is all the operational assumptions and baggage that comes with the core functionality.

For instance, the assumption that everyone wants to use an HTTP request for every little query and deploy, maintain, and debug dozens of live services. Don't buy into the microservices thing? Then you can't reuse this code.


The main thing I would say is that more companies, especially legacy ones, should have clear open source policies and an IP strategy, to enable employees to quickly assess whether they can build their work openly.


Even if they did we have no standardisation. I truly wish we had Android for Cloud. An open source operating system and framework that then let people publish software that worked seamlessly on it. I don't think docker and kubernetes went far enough.


We do have Android for Cloud. It's called any cloud provider. You can build for that provider and it will work.


Cloud providers right now look more like the PalmPilots and BlackBerrys of the world did.


I don't know. Maybe? Depends on what you mean.


I applaud Asim's effort. I can't wait to try some of these APIs out.

One of the APIs is for SMS. Any idea which provider is being used for these SMSs?

Also, I see some similarities with Rapid API. Another API sharing platform. It would be cool if other devs can add their own APIs to Micro's platform to make some extra loot.


Looks like Twilio.


I can reassure you: we see the same thing in the field of hardware.

A lack of agility, coupled with an inability to reuse code from one embedded/edge project to another. We have also chosen microservices, applying them to the embedded domain with our open-source project Luos, but there is still a long way to go.


I have had these exact same thoughts - and have half-built attempts all over. (I'm not convinced APIs running in Go are the way everyone will want this - I get the argument but I suspect this is the way language ecosystems will evolve)

But this is looking far more polished - excellent. I am inspired.

Good luck !


A monolithic app with GenServers is a better cost/efficiency ratio for getting to profitability.


no need to create a service for things that can be just a library '__')


Microservices are about reusability - at the "business logic" level. This framework and its services gets that exactly right. All the network stuff is fine and dandy but first you need to build actual features, which is where intentionally reusable microservices like these really shine.


What's with the masochistic drive of software developers to open source their code and assist in the commoditization of their craft? Why is this not seen in any other profession?

I kindly ask you to stop and keep your code proprietary and hidden, for the sake and salary of the average software engineer.


I don't know what other people's motivation is but for me after experiencing life at Google and elsewhere I really understood that to build anything at scale takes a multi-year effort building on the foundations already set. Google and Amazon don't get to be where they are by rewriting the entire stack every 2 years, yet as developers that's what we effectively have to do every time we leave a company. Only the large incumbents truly benefit from this compounding value and services development model. The rest of us just have to keep rewriting stuff from scratch and hope for opportunities to work at places with great technology.

Personally I feel like if we can open source building blocks that become the foundations for our development then we can build more interesting things on top e.g you're not going to build redis, postgres, etc again, but what are the things we build on top of them? We have a lot of open source infrastructure but not much in higher level categories. One day we'll get there, but it takes multiple people trying time and time again before we get there.


> but what are the things we build on top of them?

Countless other software developers and I build those things. If what I am currently building becomes available for free, as usable open source, I might lose my job. The company that let me go can increase their revenue and reduce their costs. You gain nothing, maybe some clout and fame.

Open source is harmful to the average developer. After years of this, we'd just be happy to reinvent StringUtils, EmailValidator and FileUtils classes every few years and take home our decent salaries.


So I'll start by saying I've spent the past 7 years working predominantly on open source, 4 of those years bootstrapped, and the majority of stuff I've written is basically given away for free. I fully understand what that does to my own earning potential, but at the same time, as a developer at every single company I ever worked at, I came to rely on open source for quite literally everything I built, whether it was Linux, Apache, nginx, Docker or Asterisk or whatever else. Our entire industry is built on the foundations of open source, and without it we end up in these silos where only the incumbents thrive; that's how we end up with platform centralisation and inevitably struggle to break free with no alternatives.

I think now is the time for us to start moving beyond just open source infrastructure and into open source services. Yes, as developers that means the things we write will change, and yes, some of us will have to reskill and learn new things, but that's also just how evolution works in engineering. Whether it's code or cars, we, the current generation, will be displaced by automation and large-scale solutions.


Traubenfuchs' profile appears to indicate it's a GPT-3 bot.


I am sorry that you feel threatened by open source projects. An "average" developer isn't harmed by open software; quite the contrary, we all benefit from it.

Your business might want to reconsider its specialization if all of your "hidden" work often gets built by a student in their free time, on the other side of the world, for free.


We are all standing on shoulders of giants. I don't think there is any software developer who can credibly claim they have not benefitted from open source software one way or another. So one of the virtues of open-sourcing your work is about paying it forward.


hear hear


To be able to build something of actual value & worth. I don't want to spend time reinventing HTTP libraries, datetime libraries, regex libraries, language interpreters, parsers, image libraries, logging, metrics, etc. while trying to build SpaceX.

Times every 2-3 years as I switch companies.

A craftsman sharing blueprints for screws or instructions for wood joinery isn't putting themselves out of the business of constructing furniture.

SWE is so far down at the bottom of the hole of having the tooling we need we could do this for a few more decades and there'd still be oodles and oodles of work to do. Right now we're digging tunnels with spoons.


Does that mean your servers don't run any Linux distro and that you compile your code purely with proprietary compilers? It's not about stealing someone else's job, it's about helping the state of the art grow naturally. So much stuff we all rely on every day has been built on top of previous work of someone else.


Wow! Incredible work. Is this the work of a single person?


It is the work of a number of people over a few years. Some of those people have now moved on but what's left is their contribution.



