
What the article says is true for now, but it doesn't mean it will always be true.

Making and transporting large milk bottles is very efficient today (because it's all automated and there are well-tested processes in place), but it wasn't necessarily always like this. When people were still figuring out how to make glass bottles by hand (through glassblowing), bigger bottles were probably more challenging to make (more prone to flaws and breakage during transportation) than small bottles. So they probably just figured out the optimal size and sold only that one.

With software it's the same thing: we don't currently have good tooling to make building scalable software easy. It's getting there, but it's not quite there yet. Once Docker, Swarm, Mesos and Kubernetes become more established, then we are likely to see the software industry behave more like an economy of scale.

Once that happens, I think big corporations will see increased competition from small startups. Even people with basic programming knowledge will be able to create powerful, highly scalable enterprise-quality apps which scale to millions of users out of the box.
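For a sense of what that could look like, here's a minimal sketch (my own, assuming the official kubernetes Python client and a working kubeconfig; the deployment name and replica count are made up) of scaling a service from outside the application code:

    # Sketch: scale an existing Kubernetes Deployment from the outside.
    # Assumes the official kubernetes Python client and a valid kubeconfig;
    # the deployment name "web" and the replica count are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()               # use local cluster credentials
    apps = client.AppsV1Api()

    # Ask for 10 replicas; the cluster schedules the extra pods itself.
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": 10}},
    )

The scaling knob lives entirely outside the application code, which is the "out of the box" part of the claim.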




That's the 90's calling back with the 4th-gen languages that were supposed to drive the developer out of work. Before that it was COBOL, a language supposedly simple enough that it wouldn't require programmers.

Automation happens a lot in the software development world, but instead of depriving the developer of work, it just piles onto the developer's shoulders. For example, today, with AWS/Docker/TDD/DDD/... I basically do the work that would have taken a team of 5 people only 15 years ago.

The thing is, there is always going to be somebody who sits at the boundary between the fuzzy world of requirements and the rigorous technical world of implementation, and those people are going to be developers (of course they will not be programming in Java but in something else, yet rigorous enough that the activity is still called programming).

Unless AI takes over, but that would probably mean that work as we know it has changed completely.


You seem to be confusing deploying software at scale with building software at scale. In fact, there are two different types of scale involved here. I can build a 1-page application and deploy it out to a billion people.

The article is talking about large software, not small software deployed to scale.

As for tools to help build large software, we have them in spades and they will continue to improve. But some things still don't seem to scale. Tools help with managing 500 developers but not enough to really make 500 developers as effective as 50 developers on a smaller project.


More than two scales, in fact; I can think of three: scale of firm, scale of distribution, scale of product.

In most industries scale of firm goes hand-in-hand with scale of distribution. Software breaks the paradigm, so we have to be careful to say which scale we're talking about.

Obviously, with distribution, software has insane economies of scale, since we can copy-paste our products nearly for free. That's why we can have small firms with a large distribution, unlike most industries.

With scale of firm, we face some of the same diseconomies as other industries. Communication and coordination problems grow superlinearly with firm size.
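To put a rough number on "superlinearly": the classic back-of-envelope model counts pairwise communication channels, n(n-1)/2 for n people (a simplification, but it shows the shape):

    # Pairwise communication channels for a team of n people: n*(n-1)/2.
    # Purely illustrative arithmetic, not a real model of coordination cost.
    for n in (5, 50, 500):
        print(n, "people ->", n * (n - 1) // 2, "channels")
    # 5 people -> 10 channels
    # 50 people -> 1225 channels
    # 500 people -> 124750 channels

Ten times the headcount buys you roughly a hundred times the potential channels.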

Effort and resources needed also grow superlinearly with product scale. That's also true of other engineering disciplines though. Making a tower twice as high is more than twice as hard. Part of it is the complexity inherent in the product, and part of it is that a more complex design needs a bigger team, so you run into the firm diseconomies of scale mentioned above.


> Once Docker, Swarm, Mesos and Kubernetes become more established, then we are likely to see the software industry behave more like an economy of scale.

> Even people with basic programming knowledge will be able to create powerful, highly scalable enterprise-quality apps which scale to millions of users out of the box.

I must disagree. The real problem with scalability is that any system that scales enough must become distributed, and distributed systems are obnoxiously difficult to reason about, and as such remain difficult to program and to verify.
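Even a toy example shows the problem (my own sketch, not anything from the article): two replicas that each accept a write locally and reconcile by last-writer-wins will silently lose updates, something a single-node program never has to think about.

    # Toy model of unsynchronised replicas: both "nodes" handle an increment
    # concurrently, then write back, and one update is silently lost.
    store = {"counter": 0}          # state both replicas read from and write to

    a_view = store["counter"] + 1   # replica A reads 0, computes 1
    b_view = store["counter"] + 1   # replica B reads 0, computes 1

    store["counter"] = a_view       # A writes 1
    store["counter"] = b_view       # B writes 1, clobbering A's increment

    print(store["counter"])         # 1, not the 2 a single node would give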

Talk to me when Docker and Swarm and the like are hosting technology platforms and frameworks that make it trivially straightforward to program distributed systems reliably, and really hard to program them wrong; then we might have the utopia you speak of.


Powerful abstractions make the promise that you'll only have to learn the abstraction to write powerful software, so "even people with only basic knowledge will be able to do X using abstraction Y".

The promise is almost always false. All abstractions are leaky, and if you do serious development with them inevitably bugs will bubble up from below and you'll have to dive into the messy internals.

For example, ZeroMQ makes distributed messaging relatively painless. Someone with very little knowledge of the network stack can write simple programs with it easily. But for any serious enterprise application with high reliability requirements, you'll eventually run into problems that require deep knowledge of the network stack.
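To make that concrete, this is roughly the level of entry ZeroMQ offers, sketched with pyzmq's REQ/REP pattern in a single process; everything that's missing (timeouts, retries, failure handling) is exactly where the deep knowledge gets demanded later:

    # Minimal pyzmq REQ/REP exchange in one process: easy to get running,
    # with no timeouts, retries, or failure handling whatsoever.
    import zmq

    ctx = zmq.Context()

    server = ctx.socket(zmq.REP)
    server.bind("tcp://127.0.0.1:5555")

    client = ctx.socket(zmq.REQ)
    client.connect("tcp://127.0.0.1:5555")

    client.send(b"ping")    # REQ sockets enforce strict send/recv alternation
    print(server.recv())    # b'ping'
    server.send(b"pong")
    print(client.recv())    # b'pong'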

> Talk to me when Docker and Swarm and the like are hosting technology platforms and frameworks that make it trivially straightforward to program distributed systems reliably, and really hard to program them wrong; then we might have the utopia you speak of.

Yes, that.


Your argument definitely applies to Backend-as-a-Service kinds of software, and I agree 100%, but the nice thing about the Docker/Kubernetes combo is that it gives you an abstraction in a way that doesn't prevent you from tinkering with the internals.

The only downside I can think of is that tinkering with those internals can become trickier in some cases (because now you have to understand how the container and orchestration layer works). But if you pick the right abstraction as the base for your project, then you may never have to think about the container and orchestration layer.


Maybe I was unclear; what I meant is that you usually need to tinker with the internals at some point. Which is fine, but it does mean you need more than basic knowledge to use the tool productively. (And if the software is proprietary and poorly documented, you're SOL.)

The lie is that the tool is so easy you just have to read this 30-minute tutorial and you'll be able to write powerful software without ever needing to learn its internal mechanics.

I haven't used Kubernetes; it's possible it's so good that you don't need to learn the messy details. I'm just sceptical of that claim in general.


Your last line pretty much describes exactly what I think the next phase will be in the container/orchestration movement. The problem with 'Docker and friends' at the moment is that they are disjointed general-purpose pieces (highly decoupled from any sort of business logic); to make anything meaningful with them, you have to do a lot of assembly (and configuration).

I was a Docker skeptic before I stumbled across Rancher http://rancher.com/. In Rancher, you have the concept of a 'Catalog' and in this catalog, you have some services like Redis which you can deploy at scale through a simple UI with only a few clicks.

I think that this concept can be taken further; that we can deploy entire stacks/boilerplates at scale using a few clicks (or by running a few commands). The hard part is designing/customizing those stacks/boilerplates to run and scale automatically on a specific orchestration infrastructure. It's 100% possible; I'm in the process of making some boilerplates for my own project http://socketcluster.io/, but you do have to have a deep understanding of both the specific software stack/frameworks you're dealing with and the orchestration software you're setting it up for (and that's quite time-consuming).

But once the boilerplate is set up and you expose the right APIs/hooks to an outside developer, it should be foolproof for them; all the complexity of scalability lives in the base boilerplate.
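To illustrate the shape I mean (the names below are made up, not SocketCluster's or Rancher's actual APIs): the boilerplate owns clustering and scaling, and the outside developer only fills in a handler.

    # Hypothetical boilerplate contract; none of these names are real APIs.
    # The idea: the app developer writes only a stateless handler, and the
    # boilerplate plus orchestration layer decide how many copies run.

    def handle_message(user_id, payload):
        # The only code the outside developer writes. Keeping it stateless is
        # what lets the boilerplate run any number of copies behind a balancer.
        return {"user": user_id, "echo": payload}

    if __name__ == "__main__":
        # Stand-in for the boilerplate's entry point; in a real setup the
        # container/orchestration layer wraps this, not the developer's code.
        print(handle_message("u123", {"hello": "world"}))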


This presupposes that Docker and friends are the answer. I have become increasingly sceptical of that. :(

This also ignores much of the physical advantage that existing powerhouses have. Amazon, as an example, will be difficult to compete with, not because AWS the software is so amazing, but because the data centers that house AWS are critical. Google and others have similar advantages.


I agree wholly with your first paragraph, but disagree entirely with your last one. Look at every industry with economies of scale today. Do any of them make it easy for newcomers to compete? At best economies of scale are entirely uncorrelated with number of players. I suspect it's worse than that: economies of scale allow large players to crowd out small ones. That will happen in software as well once we figure out how to systematize our learning (http://www.ribbonfarm.com/2012/10/15/economies-of-scale-econ...)



