There's Just No Getting Around It: You're Building a Distributed System (2013) (acm.org)
147 points by federicoponzi on April 27, 2017 | 44 comments



> There are many reasons why an organization would need to build a distributed system, but here are two examples:

> - The demands of a consumer Web site/API or multitenant enterprise application simply exceed the computing capacity of any one machine.

> - An enterprise moves an existing application, such as a three-tier system, onto a cloud service provider in order to save on hardware/data-center costs.

When you've exhausted the capacity of a single machine, typically you don't jump straight to a distributed system. You can scale out horizontally with a stateless application layer, as long as the data storage on the backend can handle all the load. You can also scale database reads horizontally, using read replicas.

This horizontal scale-out is not a distributed system, since consensus ("source of truth") still lives in one machine.

So I think a better phrasing of #1 would be "When your write patterns exceed the computing capacity of any one machine".
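
To make the read/write split concrete, here is a minimal sketch of what I mean, assuming a Postgres-style primary with one asynchronous read replica and using psycopg2; the hostnames, table, and queries are made up for illustration:

    import psycopg2

    # Stateless app servers: every instance opens the same two connections.
    primary = psycopg2.connect(host="db-primary.internal", dbname="app")  # source of truth
    replica = psycopg2.connect(host="db-replica-1.internal", dbname="app")
    replica.autocommit = True  # read-only traffic, no transaction needed

    def create_order(user_id, total):
        # Writes always go to the primary.
        with primary, primary.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (user_id, total) VALUES (%s, %s) RETURNING id",
                (user_id, total),
            )
            return cur.fetchone()[0]

    def list_orders(user_id):
        # Plain reads can be served by the replica, accepting some replication lag.
        with replica.cursor() as cur:
            cur.execute("SELECT id, total FROM orders WHERE user_id = %s", (user_id,))
            return cur.fetchall()

The catch, as the replies below point out, is that the replica lags the primary, so even this setup has a distributed flavor whether or not you call it a distributed system.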


That's... still a distributed system with all the joys of eventual consistency.


However, I do think the general consensus is that using stateless/shared-nothing app servers and following something like the 12 Factor App[1] pattern makes things much easier.

1. https://12factor.net


Yes, it does.

But just because your storage layer is abstracted behind an API doesn't mean eventual consistency isn't your concern.

Take S3 for example.

LIST calls lag deletes but exists() doesn't, so exists() could return 404 for a document that's still included in the result of LIST. Plus, all exists() calls go through a caching load-balancer, so you could receive different answers in two subsequent exists() calls.

That's just a small subset. There's a good comment here[1] on S3's consistency semantics.

Point is, 12FA makes things easier, but any time you have more than one computer involved (i.e., always), consistency becomes complicated.

[1] https://issues.apache.org/jira/browse/FLINK-5706
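
A tiny boto3 sketch of the mismatch (2017-era S3 semantics; the bucket and key are made up): a HEAD-based exists() may already report the object gone while LIST still returns it.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket, key = "my-bucket", "reports/latest.json"

    s3.delete_object(Bucket=bucket, Key=key)

    def exists(bucket, key):
        # HEAD the object; a 404 means "gone" from this endpoint's point of view.
        try:
            s3.head_object(Bucket=bucket, Key=key)
            return True
        except ClientError as e:
            if e.response["Error"]["Code"] == "404":
                return False
            raise

    listing = s3.list_objects_v2(Bucket=bucket, Prefix="reports/")
    listed_keys = [obj["Key"] for obj in listing.get("Contents", [])]

    # Under eventual consistency these two views can disagree for a while,
    # and repeated exists() calls can flip between answers.
    print(exists(bucket, key), key in listed_keys)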


Seems like when a cache gets involved, things can get complicated very quickly.


One of the two hard problems...


I don't really see how your example applies to the point the root comment made. S3 surely _is_ a distributed system, but he is specifically talking about data residing on _one_ machine (notwithstanding the bits about read-only followers).


Only if the db replicas allow stale reads.


A replica always lags master by some degree. It may be small—until things go sideways—but it's still another form of eventual consistency.


You're saying ACID databases are eventually consistent?


They can be. Multi-master DB clusters, unless they're built on consensus (and boy how slow that would be), usually are.

Read-replicas, though, don't have much to do with that; a read-replica is essentially a point-in-time snapshot of the DB that gets replaced with an updated snapshot every few milliseconds as WAL events stream in. That isn't "eventually consistent"; it's just "inconsistent." You're always looking at the past when you're talking to a read-replica.

If you need to update values using a client-side calculation, you'll need to do all that in a single transaction to get ACID guarantees. Since a transaction can only be "against" one node, and since the transaction will contain a write, that transaction (including both its reads and its writes) is forced to happen against a master node. A read-replica just isn't involved in the writing "story."
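
A sketch of that write path, assuming Postgres and psycopg2 (the table and helper names are invented): the read-modify-write runs as one transaction against the primary, and a replica never sees the write until the WAL catches up.

    import psycopg2

    primary = psycopg2.connect(host="db-primary.internal", dbname="app")

    def apply_discount(account_id, pct):
        # One transaction against the master: read, compute client-side, write.
        with primary, primary.cursor() as cur:
            cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                        (account_id,))
            (balance,) = cur.fetchone()
            cur.execute("UPDATE accounts SET balance = %s WHERE id = %s",
                        (balance * (1 - pct), account_id))
        # Commit happens when the `with primary` block exits. Reading the same
        # row from a read-replica immediately afterwards may still return the
        # old balance, because the replica only shows the past.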


I'm saying read replicas are not instantly in sync with their masters. Updates are asynchronously copied to the replica.


Or, you know, use synchronous replication, or semi-synchronous replication, or a different asynchronous model that isn't weakly consistent, such as distributed transaction logs, write-read consensus, complicated locking mechanisms, etc.


All of these solutions are still battling the fact that distributed systems are hard and there's a tradeoff to every decision.

If you use synchronous replication, you now require at least two systems to commit a write, so you've increased the chance that your commit will fail. Maybe that's the right choice—but it's a tradeoff and the decision ultimately rests on your business requirements.

My point is simply that there is no magic bullet in ensuring that all relevant systems have the same answer to the same question at the same time.
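
A toy illustration of that tradeoff (all the numbers are invented): the more acknowledgements you require before a commit counts, the more often a single unavailable node turns into a failed write.

    import random

    def try_write(node_up_probability, required_acks, total_nodes=3):
        # Each node independently acknowledges only if it happens to be up.
        acks = sum(random.random() < node_up_probability for _ in range(total_nodes))
        return acks >= required_acks

    trials = 100_000
    for required in (1, 2, 3):
        successes = sum(try_write(0.99, required) for _ in range(trials))
        print(f"acks required = {required}: commit success rate ~ {successes / trials:.4f}")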


Especially on the load balancer side...


Once you've dug a little into the hell of x86 clocks/timers, you see how easy it is to exhaust your capacity for reliable timestamping.

Modern computers are not only CPU-bound, they are also interrupt-bound, and sadly very limited in "available reliable timestamp" capacity. And controlling which clock you use is hellish.

And as if the limited availability of a raw monotonic clock were not enough, it pains me that in 2017 we still rely on the 8254.

Documentation on getting a reliable clock on any OS is close to nonexistent.

Without reliable clocks and reliable synchronisation, we are in any case unable to provide "good" distributed systems at low cost (yes, GPS is a solution, but one that can easily be hijacked).

x86 won because of legacy support, but that legacy support means we are stuck with outdated components.
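
To illustrate the "which clock are you actually reading" point, a small Python sketch (Linux-only; these clock constants aren't exposed on every OS): the wall clock can be stepped backwards by NTP or an administrator, while the raw monotonic clock only moves forward.

    import time

    wall_1 = time.clock_gettime(time.CLOCK_REALTIME)
    mono_1 = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)

    # ... if NTP steps the wall clock in here, wall-clock deltas can go negative ...

    wall_2 = time.clock_gettime(time.CLOCK_REALTIME)
    mono_2 = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)

    print("wall-clock delta:", wall_2 - wall_1)   # not guaranteed to be >= 0
    print("monotonic delta: ", mono_2 - mono_1)   # guaranteed to be >= 0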


There are also old-fashioned performance improvements; most enterprise software has plenty of room for improvement there. I've noticed a correlation between people who push for distributed systems and people who don't know how things like indexes and transactions work, or why you need to avoid N+1 queries.

I'm sure some apps need to be distributed systems, but I bet it's a tiny minority.
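
For anyone unfamiliar with the N+1 problem mentioned above, a self-contained sketch with an in-memory SQLite database (tables invented for the example): one indexed JOIN replaces a query per row.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
        CREATE INDEX idx_orders_user_id ON orders (user_id);  -- the index people forget
    """)

    # N+1 pattern: one query for the users, then one more query per user.
    users = db.execute("SELECT id, name FROM users").fetchall()
    totals = {
        name: db.execute("SELECT SUM(total) FROM orders WHERE user_id = ?",
                         (user_id,)).fetchone()[0]
        for user_id, name in users
    }

    # Single round trip instead: one indexed JOIN does the same work.
    totals = dict(db.execute("""
        SELECT u.name, COALESCE(SUM(o.total), 0)
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
    """).fetchall())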


Given that HN runs on a single server [ https://news.ycombinator.com/item?id=9222006 ], I'd be very willing to take your side of the bet.


Yes, that's exactly my point. This article makes it sound like you need a distributed system at almost any level of scale. Yet the reality is, you only need one if you are doing tons of writes. Most systems are read-heavy, which is why most people scale reads horizontally instead of using a distributed system.


More or less agree with your point; I just wanted to point out that scaling reads horizontally usually means adding a distributed quality to your system, and not understanding how that impacts the system can have bad side effects.


Where do you draw the line of where a horizontally scalable system begins? Scaling reads might only involve a caching server, but most of us probably wouldn't consider an app server with a cache to be distributed.


there's also availability, especially for enterprises. so redundant frontend and master/slave db at least. may or may not be distributed, according to one's personal definition of distributed.


People who actually know how to build distributed systems would understand transactions and indexes.


Yes, computers today are fast and "distributed" computing has massive overheads. With some engineering effort you can save many machines. I like this paper on the topic a lot: http://www.frankmcsherry.org/assets/COST.pdf


AFAIK that's not the definition the rest of us are using for a distributed system.

If anything, considering the fat client trend we're in, I'd go the other direction and claim a single web server backend with multiple clients (browsers) is a distributed system.


> This horizontal scale-out is not a distributed system, since consensus ("source of truth") still lives in one machine.

I think this is a key idea, though my experience with it only comes from writing game servers. Things are much simpler when there is a single "source of truth" per user. In my system, there is an "active" state maintained in an "instance" where the user is rapidly updated in memory, and a "persisted" state stored to the database when a user joins an instance. This keeps the frequency of database writes down. (And enables many users to be able to interact with each other in realtime.)
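
A rough sketch of that pattern (class and method names invented; the exact load/persist points are my guess at what's described): the instance mutates the in-memory "active" state on every tick and only touches the database on join/leave.

    from dataclasses import dataclass

    @dataclass
    class PlayerState:
        user_id: int
        x: float = 0.0
        y: float = 0.0
        score: int = 0

    class Instance:
        def __init__(self, db):
            self.db = db        # single source of truth, written rarely
            self.active = {}    # user_id -> PlayerState, updated many times per second

        def join(self, user_id):
            self.active[user_id] = self.db.load_player(user_id)   # hypothetical DB helper

        def tick(self, user_id, x, y, score_delta):
            p = self.active[user_id]   # in-memory only; no database write per update
            p.x, p.y = x, y
            p.score += score_delta

        def leave(self, user_id):
            self.db.save_player(self.active.pop(user_id))          # hypothetical DB helper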


Regarding testing distributed systems. Chaos Monkey, like they mention, is awesome, and I also highly recommend getting Kyle to run Jepsen tests. But we still need more tools on this front, so we built https://github.com/gundb/panic-server which integrates with Mocha (and other unit test frameworks) to make it easy to run failure scenarios across real and virtual machines. It has been a life saver for me.


I've been keen to investigate Peter Alvaro's work around fault injection. He's been working on that with Netflix[1].

The tl;dr is that you analyze successful system outcomes and inject failures along those paths to see if they subsequently fail. If they do, you've found a bug. It's the next generation of Chaos Monkey.

[1] http://techblog.netflix.com/2016/01/automated-failure-testin...
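
A toy version of the idea (nothing to do with Netflix's actual tooling; everything here is invented): record the downstream calls a successful request makes, then replay the request once per call site with that call forced to fail.

    class InjectedFault(Exception):
        pass

    def checkout(request, fail_at=None, trace=None):
        # A pretend request handler with three downstream dependencies.
        for call in ("auth-service", "inventory-service", "payment-service"):
            if trace is not None:
                trace.append(call)
            if call == fail_at:
                raise InjectedFault(call)
        return "ok"

    # 1. Record the call path of one successful run.
    path = []
    assert checkout({"user": 1}, trace=path) == "ok"

    # 2. Replay, injecting a failure at each point on that path.
    for call in path:
        try:
            checkout({"user": 1}, fail_at=call)
            print(f"{call}: request survived the injected failure")
        except InjectedFault:
            print(f"{call}: request failed; we just found a missing fallback")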


Any time you have something talking to a database, a cache, or pretty much anything else, you have a distributed system; you just don't know it yet.


> Geographies. Will this system be global, or will it run in "silos" per region?

Although I've only worked at Amazon for about a year, I've learned that you should always consider building siloed/regionalized applications—if not, expect major headaches when the service needs to be deployed in multiple environments.


I'm going to join ACM just to see what happens. Who even is in this thing?


I joined ACM several years ago. It took filing chargebacks with my credit card company to cancel my membership. Even after canceling, I still get non-CAN-SPAM-compliant emails from them, from which I've found it very difficult to unsubscribe.

If you're interested in joining a professional society for computer scientists, software engineers, and electrical engineers that doesn't resort to dark patterns, I'd recommend checking out IEEE. I don't get spam from them and they respected my decision to cancel my membership. I generally find their digital library to be of higher quality as well.

My two cents. YMMV.


I found it rather appalling that the IEEE sent me junk mail disguised as a bill, using an IEEE envelope no less. "Payment required" it says on the outside. Open it up, and it turns out they're just trying to sell you insurance.


Thanks for the warning. I was serious about joining, so I'm still going to give it a try. I'll use a disposable credit card.


So .. I'm a professional scientist. ACM isn't a fly-by-night operation. It is very similar to IEEE in terms of stature. In CS, I consider ACM conferences slightly higher tier but this is not a hard and fast rule.

Yes .. they do send a lot of material I don't care for. But if you are a student, it is definitely cheap and worthwhile to join. As a professional, you have to make your own call .. I pay for it because I feel the money goes to conferences, student subsidies, etc. (not really charity but I feel like I am paying it forward). You get discounts at conferences but my employer pays for those anyways.


Also agree. Plus, if you sign up for the ACM Digital Library, you get online access to a tremendous library of papers, publications, conference proceedings, etc.


Agreed on all points.


For what it's worth, I didn't have any issues when canceling my membership. I did get some emails for a while, but those tapered off or were easy enough to unsubscribe from (link in the emails). Regarding auto-charging the CC, just don't let them: there's an option to store your CC credentials and automatically renew (I believe the default was to not store and to not auto-renew); you can either not store them, or not automatically renew, or both. I received a notice in September of every year to renew my membership, with a deadline of October or November. That was a bit nagging, but not terrible.

I did find it difficult to unregister from ACM SIGs (Special Interest Groups); I had to press on through several screens to get to one that let me unselect them. That was obnoxious, but it only resulted in my being a member of SIGGRAPH one year longer than I desired and being out $20 or so. The material made for interesting reading, but it was no longer an active pursuit or a career-applicable topic for me.


I've been a member of the ACM for many years and they have consistently impressed me by the professional quality of their publications. They take computer science, and especially computer science education, very seriously. I recommend them to all my professional friends.


I take ACM Queue magazine to coffee shops and chill out on days I don't feel like working. It always fills me with hope and confidence.


I am a member, and have been since I was a graduate student. Some of the best computer science academic publications are in ACM conferences.


This is a verbose paper written by a tech bro that over-generalizes "How To Build A Scaling High Availability Web App", but fails to explain how such systems are designed in general. If you're a tech startup and have never worked in this industry before, the very last paragraph is useful.

My personal ideal system design is one that you can pick up and drop into a single machine, a LAN, a cloud network, geographically dispersed colocated datacenters, etc without relying on 3rd party service providers. If you go from a start-up to a billion dollar company, you will eventually have offices with their own labs, dev, qa, middleware and ops teams, datacenters and production facilities, and your hardware and software service providers will run the gamut. If you can abstract the individual components of your system so that dependencies can be replaced live without any changes to any other part of the system, you have the start of a decent design.

However, nobody I've ever worked for designed their system this way initially, and they made millions to billions of dollars, so there certainly is no requirement that you have a perfect distributed system design for your emoji app start-up.


"Mark Cavage is a software engineer at Joyent, where he works primarily on distributed-systems software and maintains several open source toolkits, such as ldapjs and restify. He was previously a senior software engineer with Amazon Web Services, where he led the Identity and Access Management team."

What makes him a tech "bro", exactly?


You mean the Oracle VP of Engineering Mark Cavage, formerly of Joyent? Yeah, what a hack, what has he accomplished? Better yet, what have you accomplished? I eagerly await your response.



