Monolith First (2015) (martinfowler.com)
820 points by asyrafql on Feb 19, 2021 | 340 comments



> It takes a lot, perhaps too much, discipline to build a monolith in a sufficiently modular way that it can be broken down into microservices easily.

I have seen that happen: importing modules from all over the place to get a new feature out in the monolith. Also, note this happened with rapid onboarding; the code-review culture was not strong enough to prevent it.

So the timing of the change from monolith to microservices is critical if you want to get rid of the monolith. Otherwise, chances are high you end up with both a monolith and microservices to take care of.


I like the "peeling off" strategy - starting with the monolith and adding microservices from the edges.

I've been working on something that tries to make starting with a monolith in React/Node.js relatively easy. Still don't have much of the "peeling" support but that is something we're looking to add: https://github.com/wasp-lang/wasp


I've heard so many horror stories where a company tried to break up a large monolith into microservices and, after X years of working on it, gave up. For example, in a monolith you can use transactions and change millions of rows. If something goes wrong, the entire transaction is rolled back. If you use microservices, you can only use transactions within a single service.
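
To make the transaction point concrete, here's a rough sketch with node-postgres (table and column names are made up): inside one database this is a single atomic unit, while across service boundaries you'd need sagas or two-phase commit instead.

    import { Client } from "pg";

    // Move money between two accounts atomically: either both rows change or neither does.
    async function transfer(client: Client, from: number, to: number, amount: number) {
      try {
        await client.query("BEGIN");
        await client.query(
          "UPDATE accounts SET balance = balance - $1 WHERE id = $2", [amount, from]);
        await client.query(
          "UPDATE accounts SET balance = balance + $1 WHERE id = $2", [amount, to]);
        await client.query("COMMIT"); // all or nothing
      } catch (err) {
        await client.query("ROLLBACK"); // any failure undoes everything
        throw err;
      }
    }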


The main takeaway for me is clean modularity in a system with strong decoupling; the modules can live in a single application boundary (a monolith) or across multiple boundaries (services or microservices).

The design challenge becomes making sure that your modules can execute within the monolith or across services. The work to be done can be on a thread level or a process level. The interfaces or contracts should be the same from the developer's point of view. The execution of the work is delegated to a separate framework that can flip between thread and process models transparently, without extra code.
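
A rough sketch of what I mean (TypeScript here, all names invented): the caller sees one contract, and only the wiring decides whether the work runs in-process or over the network.

    // The contract developers code against, regardless of deployment.
    interface ReportService {
      render(reportId: string): Promise<string>;
    }

    // In-process execution: just a function call on a thread.
    class LocalReportService implements ReportService {
      async render(reportId: string): Promise<string> {
        return `report ${reportId}`; // real work would happen here
      }
    }

    // Out-of-process execution: same contract, delegated over HTTP.
    class RemoteReportService implements ReportService {
      constructor(private baseUrl: string) {}
      async render(reportId: string): Promise<string> {
        const res = await fetch(`${this.baseUrl}/reports/${reportId}`);
        return res.text();
      }
    }

    // The only place that knows which execution model is in use.
    const reportsUrl = process.env.REPORTS_URL;
    const reports: ReportService = reportsUrl
      ? new RemoteReportService(reportsUrl)
      : new LocalReportService();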

This is how I approached my last microservices project. It could be built as one massive monolith or deployed as several microservices. You could compose or decompose depending on how many resources were needed for the work.

I fail to understand why techniques around these approaches aren't talked about in detail. They have some degree of difficulty in implementation but are very achievable and the upsides are definitely worth it.


This definitely sounds like an interesting approach.

I think, however, that defining these modules and the interfaces between them is the hard part. Part of this work is defining the bounded contexts and what should go where. If I understand DDD correctly, this shouldn't be done by «tech» in isolation. It's something that's done by tech, business and design together. This is hard to do in the beginning – and I would argue that it should not be done in the beginning.

When starting out you've just got a set of hypotheses about the market, how the market wants to be addressed and in which way you can have any turnover doing it. This isn't the point when one should be defining detailed bounded contexts; instead, one should just be experimenting with different ways to get market fit.

Edit: typo


There should be a name for it. If a name were established so people could talk about it more easily, it would make a big difference. This is a kind of design pattern, although not an object-oriented one.


Could you elaborate on the framework you mentioned for flipping between process and thread models? That sounds interesting. Was it released or just used internally for some projects?


Like EJB, which had a remote interface but also let you call the code directly without a network in between?


A bit of a shameless plug, but really looking for feedback .. I've been experimenting with Inai [1][2], a framework that can help build modular microservice-like software with similar Dev team independence properties but can independently be built and deployed as a monolith or as separate services operationally. So far, I've had fun building an internal project at much higher speed than I've managed any similar project and had more fun doing it. I feel the idea (which is still nascent) has some merit, but would like to know what folks think.

[1] source - https://github.com/Imaginea/Inai

[2] blog post describing Inai - https://labs.imaginea.com/inai-rest-in-the-small/

PS: I've had some difficulty characterising Inai (as you can tell).


In my main side project what I've done is separate different conceptual components in clojure libraries that I then import into my main program (clojure enables you to seamlessly require a private github repo as a library).

This way you get most of the benefits of separating a codebase (such as testing different things, leaving code that barely works in the repo, pointing to older versions of part of the monolith, smaller repos, etc.) whilst integration between modules is a one-liner, so I don't need microservices, servers, information transformation, etc.

There's even the possibility to dynamically add library requirements with add-lib, an experimental tools.deps functionality.


This is an excellent approach that I wish were more well known/common. We use this first before going to a complete service, which becomes easier anyway once you have a few libraries that have been in use and tested by then. It works great for our Clojure code and other languages.


I always use a monolith when building my servers, and only split out a microservice when it is really required. A particular example:

I've built a native C++ server that does some complex computations in some business domains.

This monolith exposes a generic JSON-RPC based API where agents (human or software) can connect and do various tasks: admin, business rule design and configuration, client management, report design, etc., based on permission sets.

Now an actual business client came. They have their own communication API and want integration with our server, and of course, the customer being king, they want communications tailored to their API. I was warned that this customer is one of many more the company is signing deals with. Not gonna put Acme-specific code into the main server.

This is where the microservice part comes in. I just wrote a small agent that mediates between the client's API and ours. Not much work at all. When another client comes, I can add more specific parts to this new microservice, or give it to another team to build their own using the first one as a template.
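
Roughly what such an agent can look like (a TypeScript/Express sketch rather than our actual code; endpoints and field names are invented): it only translates between the client's API and our generic JSON-RPC API, so no Acme-specific code ever touches the main server.

    import express from "express";

    const app = express();
    app.use(express.json());

    // Acme calls us in their format...
    app.post("/acme/orders", async (req, res) => {
      // ...we translate it into our generic JSON-RPC call...
      const rpcCall = {
        jsonrpc: "2.0",
        id: 1,
        method: "orders.create",
        params: { clientRef: req.body.OrderRef, items: req.body.Lines },
      };
      const upstream = await fetch("http://core-server:9000/rpc", {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(rpcCall),
      });
      const reply: any = await upstream.json();
      // ...and answer in the shape Acme expects.
      res.json({ Status: "OK", Reference: reply.result?.orderId });
    });

    app.listen(8080);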

Physical scalability I do not worry about. We're not Google and never will be, due to the nature of the business. Still, the main server can execute many 1000s of requests/s sustainably on my laptop (thank you, C++). Put it on some real server-grade hardware and forget about all that horizontal scalability. Big savings.


I, like many others, have been saying this for years. Too bad I didn't see this link before; it would have helped convince a few people in the past. Now, fortunately, it doesn't seem that much of a heresy to say that monoliths (or SOA) are the right architecture a lot of the time (maybe most of the time).

I remember, a little more than 4 years ago, I was brought on to save a product launch for a startup. They were, as far as I can remember, already creating the 2nd iteration/rewrite of their MVP, with the first one never released, and just a few months short of their first release one of their developers started to convince everyone that the code base was a pile of crap and that they needed to rebuild the whole thing in microservices. Because there was no other way around it. The system was built in PHP, and the otherwise smart and motivated developer wanted to rebuild it as a set of JS microservices.

He almost managed to convince the management, including the development lead and the rest of the team. And it wasn't easy to convince him that it would have been a pretty dumb move and that it wouldn't have taken just a few more months, just because creating a 'user service' with a mongo backend was something he could pull off in his spare time.

Then I realized that there is something inherent in these kinds of situations. Some people come around stating that a new piece of technology (or other knowledge) is better, and then suddenly the rest of the world is somehow on the defensive. Because they know something the rest don't, so how could you prove them wrong? Not easy. And funnily enough, this is actually a classic case of the shifting-the-burden-of-proof fallacy.


I have worked on both types of applications and what he says is very true. The first was a startup which failed to launch because of difficulties in getting the architecture and infrastructure right. It was around 2014, and the company eventually ran out of money and, more importantly, missed the time frame. The second product was a massive enterprise product that had 100s of members working on it on any given day. The product was written in Java as a monolith. It was slow to develop and a pain to work on. But it worked and the product succeeded. People complained that it was slow, and they were right.

So they decided to break it up into multiple services. The first was a PDF report generation service, which I wrote in node.js. Then more and more services were added, and other modules were ported as node apps or separate Java apps. In the end there were around 12 services and all of them worked well.

The monolith was still there, but it was fast enough and users were happy. That's a lesson I'll never forget, and I have understood that time and velocity are far more important for a product to succeed!


If you design for testability and use proper dependency injection, your "monolith" will already be "ready" for microservices.
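
A small sketch of the idea (TypeScript, invented names): the logic depends on an interface, so tests inject a fake today, and the same seam is where an HTTP client for a separate service could slot in later.

    interface UserRepo {
      find(id: string): Promise<{ id: string; email: string } | null>;
    }

    // Business logic only knows the interface, injected via the constructor.
    class WelcomeEmailer {
      constructor(private users: UserRepo) {}
      async subjectFor(userId: string): Promise<string> {
        const user = await this.users.find(userId);
        if (!user) throw new Error("unknown user");
        return `Welcome, ${user.email}`;
      }
    }

    // In tests (and later, behind a service boundary) the dependency is swapped.
    const fakeRepo: UserRepo = {
      find: async (id) => (id === "42" ? { id, email: "a@example.com" } : null),
    };
    new WelcomeEmailer(fakeRepo).subjectFor("42").then(console.log);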


From my experience the issue with monoliths is organizational and cultural first, then technical (maybe). Monoliths are usually managed by 1 team (IT, for instance). This 1 team has never had to collaborate with other teams on building/configuring/improving the monolith. Centralizing things works well for stability but it is horrible for innovation. For instance, ERPs like SAP are monoliths: how do you inject, say, an AI algo to improve productivity on managing invoices into them? The team pushing for that AI is probably super close to accounting but probably far from the SAP IT team. The SAP IT team is incentivized on stability while the accounting one on productivity, so how do you resolve the conflict? How do you work together and collaborate? I am sure microservices have many issues, but they make these conversations not just easier but necessary. I think this is the biggest advantage of microservices.


We are in the middle of the first production-scale deployment of a greenfield microservice-based application now and there are certainly a lot of pain points. On the flip side, I've been a part of about half a dozen extremely successful strangler migrations[0], some of which were downright easy even with a complicated business domain. I often wonder if we would have been better off deploying a monolith in half the time and immediately starting a strangler migration once the rate of business changes slowed down. I've become more and more convinced over the past decade that Monolith First + Strangler Migration is the most stable, safest way to deliver mid-sized software projects.

[0] https://microservices.io/patterns/refactoring/strangler-appl...


I've worked in two major microservices environments. The first was a new application where everything was sloppily wired together via REST APIs and the code itself was written in Scala. There was a massively weird conflict there: the devs wanted a challenge on a code level, but nobody wanted to think about the bigger picture.

The other was an attempted rebuild of an existing .NET application to a Java microservices / service bus system. I think there was no reason for a rebuild and a thorough cleanup would have worked. If that one did not move to the microservices system, the people calling the shots would not have a leg to stand on because the new system would not be significantly better, and it would take years to reach feature and integration parity.


I recently got introduced to the Laravel framework in PHP. I think it's probably the best implementation of a monolith web framework right now. PHP is very simple if you come from C/Java, and the framework provides you with so much functionality out of the box.


The question isn't microservices or monolith. The question is: What problem are we solving, and what's the most efficient way to solve it?

Requirements should dictate architecture. Data should dictate the model. Avoid unnecessary work. Be mindful of over-engineering.


There is no one-size-fits-all solution here. In my experience it is sometimes good to start (conceptually) as a monolith and then divide the service into microservices eventually. It also depends on the maturity and experience of the team handling the services, because microservices do need a different temperament both to develop and to manage.

I'm not from the camp that believes in creating services for every single thing - it's the same camp that believes in starting with distributed databases and then moving in the same direction. I believe in PostgreSQL for everything, and moving to distributed only if the application demands it ... Wait, did I just start another war in here!


A lot of people are saying microservices are great if you do them right. What they miss is that microservices require a whole bunch more people than a monolith to operate; they ignore the cost of adding those people. Really it's like comparing apples and oranges. Sure, if you have the resources and the headcount to have 37 and a half teams each owning only one piece of your app in a competent way, go for that architecture. But if not, stop advocating for a unicorn that only large companies with big budgets can benefit from and preaching it to startups that are struggling to get off the ground as if that should be the industry norm.


I just wonder what kind of organization or project faces this kind of decision.

I've worked on workflow/crud-type web applications whose computational needs could be met using a single server with a couple of GB of RAM using a single database, indefinitely into the future. I don't see why it would occur to someone to split one of those into multiple services.

I've worked on systems for which many such servers were required in order to provide the expected throughput and, in such a case, writing a monolith is really not an option.

Is there a significant middle ground?


Thank you Martin. I have argued against micro services for writing systems (until you really need them) for a while and get real pushback.

For me, the issue is that a monolith based on, for example, Apache Tomcat using background work threads lets me also handle background tasks in one JVM. I have not used this pattern in a long time, but when I did it served me well.

I think Java and the JVM are more suitable for this than Python and a web framework. I tried this approach with JRuby one time using background worker threads but I wouldn’t use CRuby.


Now that Fowler himself wrote about it I hope that the masses follow. Next I hope that the trend of going vertical-first comes back. Mr Fowler, could you please write an article on vertical-first scaling?


Lack of engineering talent + accessibility of the cloud + buzz words got us here. Looking forward to the end of this cycle.

This is somewhat reminiscent of the abuse of higher-level languages and the mindset that computers are so powerful there's no need to think too hard. However, the consequences are no longer limited to a slow and buggy program, but many slow and buggy programs and a large AWS bill too!


The article is from 2015, so doesn't look like this had much impact!


Microservices are a networked/cloud buzzword for the Unix Philosophy. Doing “one job and doing it well” implies, though, that you know what that job is, what the right tool is, and that you know how to do it well. Another word for “monolith” could easily be “prototype” for the sake of this article. If only we all had the time and money to do real software engineering, amirite? But time and time again it's been proven out to me that “the hack becomes the solution” and that we're not going to go back and “fix” working code.


Your first and biggest problem when you start a technology startup is lack of customers and revenue. Do everything that gets you to the first customers and first revenue, and do everything that helps you find a repeatable & mechanical distribution strategy to reach more customers and more revenue.

Then and only then should you even consider paying back your technical debt. Monolith first always until you have the cash to do otherwise, and even then only if your app desperately needs it.


Microservices are hard. Monoliths are hard too. Focus on the product and the customer, build what they need. Architecture is a means to make a successful product.


Agreed and I'll add that the architecture must evolve into its most efficient form in tandem with the success of the product.


I'm wondering if "microservices" hasn't lost its initial meaning. To me, an application composed of the "core", a message queue and a database server already qualifies as "microservices". I know which parts are "delicate" and which parts are "kill-at-will". That is what I care about at a very high level, not having a separate codebase for every single HTTP route.


The fact that we talk about it raises some questions. Surely it depends on the problem at hand. There are multiple companies solving various issues, and those are either global or local, well defined or ambiguous. There is no one approach, as it depends on where you stand at the moment as well. Another problem is developers themselves, as they are incentivized to push for "over-engineering" because it'll make their CVs look better.


I question the relevance of this with cloud based solutions such as AWS Lambda, where you can spin up a production ready auto-scaled service very quickly; it certainly reduces the cost of ownership and operations. Sure if you are a team of 20 developers working on an app - go monolith.

But if you are a 300 person organization launching something from the ground up, I would choose many serverless solutions over a single monolith.


> But even experienced architects working in familiar domains have great difficulty getting boundaries right at the beginning. By building a monolith first, you can figure out what the right boundaries are, before a microservices design brushes a layer of treacle over them.

I think this is one of the most important points. Often it takes time to figure out what the right boundaries are. Very rarely do you get it right a priori.


"Monolith vs microservices" is a bit like "Fullsize SUV vs multiple sedans". They are used differently, so you should pick that which fits your purposes. And the idea that you have to do one or the other is a bit ridiculous. I've seen plenty of businesses that organically landed on a mix of monolith and microservice and things in between. Don't get caught up in formalism.


I wonder what the trend would be without the push from the huge megacorps that couldn't do without microservices (MAGA mostly) and that decided to start selling their cloud services.

Would they have kept them for themselves (including k8s)? Would microservices be just as fashionable? I'd really like to know the answer, while watching my friends' startup of 4 people running their 14 microservices on a k8s cluster.


A microservice architecture should mainly be used to solve runtime issues, such as to improve scalability or fault tolerance. Also, it can be useful in order to adopt new languages or frameworks.

It is not something that should be used simply to clean up your code. You can refactor your code in any way you see fit within a monolith. Adding a network layer in between your modules does not make this task simpler.


I tend to take a “hybrid” approach. I believe in modular architectures, as I think they result in much better quality, but the design needs to be a “whole system” design, with the layers/modules derived from the aggregate.

Once that is done, I work on each module as a standalone project, with its own lifecycle.

I’m also careful not to break things up “too much.” Not everything needs to be a standalone module.


If we were to ignore the mechanism part of microservices, we could say that qmail and Postfix have a microservices architecture. Both of them have fared much better than monolithic Sendmail. And their track records for resilience and reliability are very encouraging too.

There exist other ways of designing 'microservices' that are not necessarily conventional monoliths!


Surely a radical viewpoint against the (current) state of the art in software architecture?

Why swim against a tide of industry “best practice” that says ...

... Let's make our application into many communicating distributed applications. Where the boundaries lie is unclear, but everyone says this is the way to produce an application (I mean applications), so this must be the way to go.


This is the never-ending cycle of "too much order" - "too much chaos". It takes a lot of experience to be able to judge how much chaos you want. That is all... experience. I don't think any theory can tell you how much or how little is right.


Monoliths are even easier to manage in 2021 because of workspace dependency management (e.g. yarn workspaces, cargo workspaces) etc. You can have your cake (microservices built as separate packages) and eat it too (a monorepo workspace w/ all your code).


A monolith designed using a component-based architecture, where each of the components has a well-defined service boundary, can easily be split up into a microservices architecture, with each of the components from the monolith becoming a subsequent microservice.


(2015)

I am very interested in what he could add now.


I don't understand why people think that microservices and monoliths are the only 2 options.

https://eng.uber.com/microservice-architecture/


I take this same philosophy at all levels of my code. It's like the big bang: start out with a dense hot lump of code and as it grows and cools off things break apart into subunits and become more organized, documented, standalone, etc.


Modular First.

You start out as technically a monolith, but that is prepared at all times to be decomposed into services, if and when the need arises.

It's nothing too fancy - can be simply another name for Hexagonal Architecture, functional-core-imperative-shell, etc.
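
A hedged sketch of that shape (TypeScript, names made up): a pure core, a port to the outside world, and a thin imperative shell doing the wiring. The port is exactly where a service boundary could later appear.

    // Functional core: pure decision logic, trivially testable.
    function priceWithDiscount(total: number, loyaltyYears: number): number {
      return loyaltyYears >= 3 ? total * 0.9 : total;
    }

    // Port: how the core's callers reach the outside world.
    interface OrderStore {
      totalFor(orderId: string): Promise<number>;
      saveTotal(orderId: string, total: number): Promise<void>;
    }

    // Imperative shell: wiring and side effects live here, not in the core.
    async function applyDiscount(store: OrderStore, orderId: string, years: number) {
      const total = await store.totalFor(orderId);
      await store.saveTotal(orderId, priceWithDiscount(total, years));
    }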


It's surprising to disagree _so_ much.

The value of microservices is to isolate risks and have some manageable entity to reason about, to have an understandable scope of lifecycle, test coverage and relations with other services. The larger the piece, the harder it is to do.

Splitting up is normally harder than building up, and often impossible to agree on. Any monolith I worked on was coupled more than necessary because of the regular developer dynamic of using all code available on the classpath. I've never even seen a successful monolith split - you usually just rewrite from scratch, lift-n-shifting pieces here and there.


Monolith First.

The point of this idea isn't that it says "monolith"; it's that it includes time as a factor. Too much of our discussion focuses on one state or another, and not on the evolution between states.


> The second issue with starting with microservices is that they only work well if you come up with good, stable boundaries between the services - which is essentially the task of drawing up the right set of BoundedContexts.

> The logical way is to design a monolith carefully, paying attention to modularity within the software, both at the API boundaries and how the data is stored. Do this well, and it's a relatively simple matter to make the shift to microservices.

This is the big take-away to me -- for a long while now I've seen this whole monoliths-vs-microservices debate as a red herring. Whether your conduit is functions in a shared virtual address space, HTTP or a Kafka topic, the real problem is designing the right contracts, protocols and interface boundaries. There's obviously an operational difference in latencies and deployment difficulty (i.e. deploying 5 separate apps versus 1), but deep down the architectural bits don't get any easier to do correctly; they just get easier to cover up when you do them badly and there's less latency (so they don't sink your deployment/performance/etc).

What we're witnessing is a dearth of architectural skill -- which isn't completely unreasonable because 99% of developers (no matter the inflated title you carry) are not geniuses. We have collectively stumbled upon decent guidelines/theories (single responsibility, etc), but just don't have the skill and/or discipline to make them work. This is like discovering how to build the next generation of bridge, but not being able to manage the resulting complexity.

I think the only way I can make the point stick is by building a bundle of libraries (ok, you could call it a framework) that takes away this distinction -- the perfect DDD library that just takes your logic (think free monads/effect systems in the haskell sense) and gives you everything else, including a way to bundle services together into the same "monolith" at deployment time. The biggest problem is that the only language I can think of which has the expressive power to pull this off and build bullet-proof codebases is Haskell. It'd be a hell of a yak shave and more recently I've committed to not spending all my time shaving yaks.
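
To make the "bundle into one monolith at deployment time" idea concrete, here's a much more modest sketch than the Haskell version I have in mind (TypeScript/Express, all names invented): each piece is an ordinary router, and a config flag decides whether they share a process.

    import express, { Router } from "express";

    // Each "service" is just a router that owns its routes.
    const users = Router();
    users.get("/users/:id", (req, res) => res.json({ id: req.params.id }));

    const billing = Router();
    billing.get("/invoices", (_req, res) => res.json([]));

    // Monolith deployment: everything mounted in one process.
    function runMonolith() {
      const app = express();
      app.use(users);
      app.use(billing);
      app.listen(3000);
    }

    // Split deployment: each router served separately (in real life, its own process).
    function runService(r: Router, port: number) {
      const app = express();
      app.use(r);
      app.listen(port);
    }

    if (process.env.DEPLOY === "split") {
      runService(users, 3001);
      runService(billing, 3002);
    } else {
      runMonolith();
    }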

Another hot take if you liked the one above -- Domain Driven Design is the most useful design for modern software development. The gang of 4 book can be reduced to roughly a few pages of patterns (which you would have come across yourself, though you can argue not everyone would), and outside of that is basically a porn mag for Enterprise Java (tm) developers.



Using Haskell-style IO types - couldn't we lift and shift anything at build time between a network call and a function call? A monolith that could shard at any function boundary into microservices.


If starting from scratch, let's say as a startup, it's still true that the vast majority of all software projects fail. Therefore optimising things prematurely is not a good use of time or money.


> I feel that you shouldn't start with microservices unless you have reasonable experience of building a microservices system in the team

Well, yeah... obviously. That's not the same thing as it being bad to start with microservices generally.

I just don't agree with this at all. The designer of the architecture clearly does need to know how to design service-based architectures and needs to have a very strong understanding of the business domain to draw reasonable initial boundaries. These are not unusual traits when starting a company as a technical founder.


What if those boundaries change because an initial assumption turns out to be wrong?

In a monolith, no biggy. With microservices, huge pain.

Moving to microservices is something you do to optimise scalability, but it comes with costs. A major cost is the reduction in flexibility. Starting a new not-very-well-understood thing needs massive flexibility.


I think we've found in many situations that the microservice boundaries make it easier to change things. We were pretty careful in making fairly conservative choices initially though, and split some of the bigger systems up more over time.


> These are not unusual traits when starting a company as a technical founder.

It's actually not unusual to have technical cofounders who have no prior software engineering work experience.


I see two categories of technical founders.

1. The people who have many years of experience building a highly specialized application using domain knowledge gained at their previous job.

2. The new CS grad right out of college who had big dreams, a lot of time, and high risk tolerance. Unclear as to whether they're starting up the company because they failed to get a job elsewhere or because they think they're about to build the next Facebook.


I'm definitely not in either of those camps, more mid-senior level experience with highly specialised domain knowledge in a completely unrelated field (neuro/QEEG feedback training before, now fintech/insurance)


Conway's Law is king


The big curse of microservices is the word "Micro". It should have just the right size, be it big or small.


Brilliant article Martin and brilliant discussion Hacker News. This is why this forum is still the best.


I worked on a project that was a monolith. A team was put in place to rewrite it as microservices, and shortly after I quit that job. About a year after that I ate lunch with an ex-colleague who told me they were still working on that rewrite.

Looking at the page now it looks like the monolith is still running as far as I can see, about 5 years later. I guess they gave up. :)


Was the rewrite to microservices really the problem here? I’ve worked on a few such projects too, where we decided to rewrite the application, but it never ended up replacing the old existing one. Technology was never the issue in all these.


Most likely not; the new guys were PhD types who were ultra smart but did nothing except have meetings. I think they were over-engineering it completely, which made it impossible to deliver something of value.

However, I think that is often the case of a microservices architecture.


> Most likely not; the new guys were PhD types who were ultra smart but did nothing except have meetings.

There's your problem, regardless of architecture, it would most likely have been over-engineered anyway.


hah, I made a video on the same vibe. https://www.youtube.com/watch?v=clagrT5BC7g

Martin is definitely ahead of the curve, or perhaps I'm behind.


Microservices primarily solve a team-scaling problem. The technical one comes second.


The question is from what team size onward microservices make things easier. I bet the number is much bigger than what people think.


We started with flask(https://flask.palletsprojects.com/en/1.1.x/) and never looked back. It enables pure, iterative, incremental development.


Article is from 2015. Should this be added to the title?


To make microservices, you just first build a monolith.


He can write all the posts about software engineering practices he wants; his server is still down right now :)


Gall's law.


my TL;DR of the entire article:

> Although the evidence is sparse, I feel that you shouldn't start with microservices unless you have *reasonable experience of building a microservices system* in the team.


[2015]


Architecting systems is a dance around tradeoffs: what weight do you assign to each of the 'ilities'? Some products/services are inherently complex; for those, complexity cannot be destroyed, just transformed from one form to another. I agree that in many cases people jump on the microservices architecture too soon, without first developing a mature understanding of the domains involved in the application as well as the data flow and state transitions. Some years ago I was involved in re-architecting a monolith. This was a team that cherished testing, had modules and layers in the monolith, swore by DRY and so on. There were a few problems:
* Adding features was an expensive exercise. For all the layers in place, the abstractions and interfaces had evolved over 5 years and not necessarily with enough design upfront.
* Performance was poor. The monolithic architecture was not necessarily to blame here, but rather using a document-oriented data store and doing all the marshalling/unmarshalling in the application code. The reasoning was that 'we can store anything, we can interpret it any way we like'. In practice, the code was re-inventing what relational databases were designed for.

I proposed and designed a microservices architecture with the team. I had done that sort of thing a few times even before they were called microservices. There were a few advantages:
* The product team had the freedom to re-think the value proposition, the packaging, the features. It was critical for the business in order to remain competitive.
* The development team could organize in small sub-teams with natural affinity to the subdomains they had more experience/expertise in.
* Each domain could progress at the pace that fits, with versioned APIs ensuring compatibility. Not necessarily a prerequisite unique to microservice success; one can argue versioning APIs is a good practice even for internal APIs, but the reality is that versioning internal APIs is often less prioritized or not addressed to begin with.

There are technical pros and cons for monolith/MS. Additional data that can augment this decision is the org structure, the team dynamics and the skillsets available. In the case of that project, the team had excellent discipline around testing and CI/CD. Of course there are challenges. Integration testing becomes de facto impossible locally. Debugging is not super difficult with the right logging, but still harder. One challenge I saw with that project and other projects that adopt microservices is that the way of thinking switches to 'ok, so this should be a new service'. I think this is a dangerous mindset, because it trivializes the overhead of what introducing a new service means. I have developed some patterns that I call hybrid architecture patterns, and they have served me well.

One thing to consider when deciding what road to take is how the presentation layer interacts with the microservices. When it is possible to map the presentation to a domain or a small subset of the domains, the microservices approach suffers less from the cons. A monolithic presentation ---> microservices backend could severely reduce the benefits.

Two good references here: 'Pattern-Oriented Software Architecture' and 'Evaluating Software Architectures'.




Thank you, Martin.


Monoliths are a solution in search of problems. Microservices are a solution in search of problems.

Generally, focus on solving the problems first.

Can the problem be solved by some static rendering of data? A monolith is the better solution.

Does the problem require constant data updates from many sources and dynamic rendering of data changes? Microservices and a reactive architecture are the better solution.

There's no one-size-fits-all solution. The better software engineers recognize the pros and cons of many architecture patterns and mix and match the portions that make sense for the solution, given the schedule, engineering resources, and budget.


I don't understand what you mean by "static rendering" vs "dynamic rendering", but I suspect I wouldn't agree on that.

Monolith vs microservice IMO depends more on team size/structure, stage of the project (POC/beta/stable/etc.), how well defined the specs are, … rather than on the nature of the problem itself.


Was hoping for something 2001-related.




