Then there are the wise lessons on general topics, like the idea that you should "wait until your site grows so you can learn where your scaling problems are going to be". I'm pretty sure we know what your scaling problems are going to be. Every single resource in your platform and the way they are used will eventually pose a scaling problem. Wait until they become a problem, or plan for them to become a problem?
I'm not that crazy. It really doesn't take a lot of time to plan ahead. Just think about what you have, take an hour or two and come up with some potential problems. Then sort the problems based on most-imminent most-horrible factors and make a roadmap to fix them. I know nobody likes to take time to reflect before they start banging away, but consider architectural engineering. Without careful planning, the whole building may fall apart. (Granted, nobody's going to die when your site falls apart, but it's a good mindset to be in.)
Let me tell you a story: in 1998 at Inktomi (look it up) we had a distributed search engine running on Sun hardware. We could not have anticipated that we'd need to migrate to Linux/PC because Sun's prices would make it impractical for us to continue scaling using their hardware. It took us three years to make the switch, and that's one of the reasons we lost to Google. Had we started two years later (when Gigabit ethernet became available for no-brand PC hardware), then we would have built the entire thing on Linux to begin with.
"It really doesn't take a lot of time to plan ahead."
Have you ever been through the growth of a startup and watched your infrastructure cost soar to five, six, seven figures per month? Two hours will get you as far as "one day we'll probably need to replace MySQL with something else." What you don't know is what that something else will be. Too many writes per second? Reads? Need for geographical distribution? A schema that changes all the time because you need features and fields you never imagined? Will you need to break up your store into a service-oriented architecture with different types of components? Will you run your own datacenter, or will you be on AWS? What will be the maturity level of different pieces of software in two years?
I hope you get the point.
I assume by no-brand you mean custom-built, and presumably the cheapest available, in which case even one gigabit interface may have been difficult, seeing as a 32-bit, 33 MHz PCI bus tops out barely above gigabit speed (32 bits × 33 MHz ≈ 1.06 Gbit/s). In any case, the money you saved on Sun gear could have bought you a sizeable PC cluster that, even with several 100 Mbit interfaces, would have been more powerful and cheaper. Really I think it wasn't built on Linux because Sun was the more robust, stable platform. But I could be crazy.
While I'm being crazy, I should point out that all the other things you mentioned can be planned for. Anybody who's even read about high-performance systems design should be able to account for too many reads/writes! Geographical distribution is simple math: at some point there are too many upstream clients for one downstream server, capacity fills, and latency goes through the roof. A DBA knows all about schema woes. I thought service-oriented architecture was basic CS stuff? (I don't know, I never went to school.) AWS didn't exist at the time. And the maturity level of your software in two years will, obviously, be two years more mature.
All of these problems are what someone with no experience or training will run into. But there should be enough material out there now that anyone can read up enough to account for all these operational and design problems, and more. But if your argument is that start-up people shouldn't have to know it because hey, I haven't got time to figure out how to do it right because I have to ship right now, I don't buy that for a minute.
There's a paper that some guy wrote years ago that goes over in great detail every single operational scaling issue you can think of, and it's free. I don't remember where it is. But it should be required reading for anyone who ever works on a network of more than two servers.
As an aside: was it really cost that prohibited you from porting to Linux? This article from 2000 states "adding Linux to the Traffic Server's already impressive phalanx of operating systems, including Solaris, Windows 2000, HP-UX, DEC, Irix and others, shows that Inktomi is dedicated to open-source standards, similar to the way IBM Corp. has readily embraced the technology for its eServers." And this HN thread has a guy claiming that in 1996 "we used Intel because with Sun servers you paid an extreme markup for unnecessary reliability". However, it did take him 4 years to move to Linux. (?!) A lot of other interesting comments on that thread.
It is also extremely non-trivial to replace your entire network fabric alongside new serving hardware and a new OS platform. These are not independent web servers, these are clustered systems which all speak peer to peer during the process of serving a search result. This is very different from running a few thousand web servers.
Even once migrated to gigE and linux, I watched the network topology evolve several times as the serving footprint doubled and doubled.
I assure you, there is no single collection of "every single operational scaling issue you can think of," because some systems have very different architectures and scale demands -- often driven by costs unique to their situation.
Was the app myrinet-specific? If so, I can understand increased difficulty in porting. But at the same time, in 1999 and 2000, people were already building real-time clusters on Linux Intel boxes with Myrinet. (I still don't know exactly what time his post was referencing) If Diego's point was that they didn't move to Linux because Gigabit wasn't cheap enough yet, why did they stick with the expensive Sun/Myrinet gear, when they could have used PC/Myrinet for cheaper? I must be missing something.
I can imagine your topology changing as you changed your algorithms or grew your infrastructure to work around the massive load. I think that's natural. My point was simply that making an attempt to understand your limitations and anticipate growth is completely within the realm of possibility. This doesn't sound unrealistic to me [based on working with HPC and web farms].
What I meant to say was "every single issue", as in, individual problems of scale, assumptions made about them, and how they affect your systems and end users. It's a broad paper that generically covers all the basic "pain points" of scaling both a network and the systems on it. You're going to have specific concerns not listed, but it points out all the categories you should look at. I believe it even went into datacenter design...
> And the maturity level of your software in two years will, obviously, be two years more mature.
That's a pretty Reddit thing to do.
And at the same time they use and praise Postgres a lot, so it cannot be about NoSQL.
I am wondering what they mean exactly. My own guess is that it means using a few very big, narrow tables of the form "who - did - what - when - where", e.g. "userA - vote up - comment1 - timestamp - foosubreddit", and also "userB - posted - link1 - timestamp - barsubreddit".
Then in the same table you get more or less every event happening on the site, and you are somewhat schemaless, in the sense that adding new functionality does not require a schema change.
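The shape being guessed at above can be sketched like this, using Python's built-in sqlite3 for illustration (column names and values are mine, not reddit's actual schema):

```python
import sqlite3

# A sketch of the "who - did - what - when - where" event table described
# above (my guess at the shape, not reddit's real schema).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        actor   TEXT NOT NULL,    -- who:   "userA"
        verb    TEXT NOT NULL,    -- did:   "vote_up", "posted"
        object  TEXT NOT NULL,    -- what:  "comment1", "link1"
        ts      INTEGER NOT NULL, -- when:  unix timestamp
        context TEXT NOT NULL     -- where: "foosubreddit"
    )
""")
conn.execute("INSERT INTO events VALUES ('userA', 'vote_up', 'comment1', 1355000000, 'foosubreddit')")
conn.execute("INSERT INTO events VALUES ('userB', 'posted', 'link1', 1355000100, 'barsubreddit')")

# Adding a new feature ("saved a post", say) needs no schema change,
# just a new verb value:
conn.execute("INSERT INTO events VALUES ('userA', 'saved', 'link1', 1355000200, 'barsubreddit')")

verbs = [row[0] for row in conn.execute("SELECT DISTINCT verb FROM events ORDER BY verb")]
print(verbs)  # ['posted', 'saved', 'vote_up']
```

The "somewhat schemaless" property is exactly that last insert: the table's columns never change, only the vocabulary of verbs grows.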
If someone with insider knowledge can confirm this is not too far from what the reddit team meant, I'd appreciate it.
We had a basic schema that basically made postgres into a K/V store. So we had both.
^^ They probably have a basic schema, the rest is in KV store (guess)
Can you go into more detail about how this is used? If reddit needs to display a page that has 100 comments, do they query Cassandra for the voting status of the user on those 100 comments?
I thought Cassandra was pretty slow in reads (slower than postgres) so how does using Cassandra make it fast here?
Sort of, yeah. There are two versions of this, the old way and the new way.
Old way: keep a cache of the last time they voted. Remove any comments from those 100 that are younger than the last vote (since they can't possibly have voted on them). Then look up the keys containing the remaining ones. Most of them are answered by the bloom filter alone; those that pass the bloom filter actually get looked up. In the worst case this is all 100 comments, which can hit up to 1/RF of your Cassandra nodes. The worst case doesn't happen often.
The new way is a little different, you have one Cassandra row (so only one machine) containing all of the votes for the user (perhaps further limited to a given link ID or date). You hit that one node for all 100 comments. If you have per-row bloom filters, see Old Way for the rest.
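The old-way read path can be sketched roughly like this (function and variable names are hypothetical, and the Cassandra lookup is stubbed with a dict; the real path would also consult per-SSTable bloom filters before touching disk):

```python
# Sketch of the "old way": skip comments created after the user's last
# vote, then look up only the remainder in the vote store.
def votes_for_comments(comments, last_vote_ts, vote_store, user):
    """comments: list of (comment_id, created_ts) pairs."""
    # Comments younger than the last vote can't possibly have been voted on.
    candidates = [cid for cid, created in comments if created <= last_vote_ts]
    # In Cassandra, most misses here would be rejected cheaply by a bloom
    # filter before any disk read; a dict membership test stands in for that.
    return {cid: vote_store[(user, cid)]
            for cid in candidates if (user, cid) in vote_store}

store = {("alice", "c1"): 1, ("alice", "c3"): -1}
comments = [("c1", 100), ("c2", 150), ("c3", 120), ("c4", 300)]
print(votes_for_comments(comments, 200, store, "alice"))
# {'c1': 1, 'c3': -1}  -- c4 was filtered out by the last-vote timestamp
```

The new way changes where the data lives (one row per user, so one node serves all 100 lookups), not this filtering logic.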
> I thought Cassandra was pretty slow in reads (slower than postgres) so how does using Cassandra make it fast here?
"Fast" and "slow" as used here is very naive, performance is usually more complicated than simple statements like this. I guess if you had 1 postgres node with 1 key and one Cassandra node with 1 key, and only 1 client, you could answer simple generalisations like this. But reddit has thousands of concurrent hits spread over hundreds of servers with varying amounts of RAM, I/O performance, network bottlenecks, and usage profiles.
The single biggest win is that you can easily horizontally scale Cassandra. Just add more nodes until it's "fast". But even that's a gross simplification.
For another example, if you scale Postgres by adding a bunch of replicas and choose a random one to read from for a given key, then they all have the same data in RAM, so your block cache miss rate is very high (that is, your effective block cache is the amount of RAM in one machine). Additionally, your write performance is capped at the write performance of your one master, and your replication throughput is capped at its outbound network bandwidth.
So you want a subset of the data on all of them so that whichever machine you ask has a very high likelihood of having that data in block cache. So you shard it by some function. But then you want to add more postgres machines, you have to migrate the data somehow, without shutting down the site. You've now more or less written Cassandra.
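The "shard it by some function" step, and why the later migration hurts, can be sketched as follows (illustrative names only). With naive modulo sharding, changing the node count relocates most keys, which is the migrate-without-downtime problem described above and part of why Cassandra uses consistent hashing instead:

```python
import hashlib

# Naive modulo sharding: map a key to one of num_nodes shards.
def shard(key, num_nodes):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % num_nodes

keys = [f"user:{i}" for i in range(1000)]
before = {k: shard(k, 4) for k in keys}   # 4 postgres machines
after = {k: shard(k, 5) for k in keys}    # add a 5th machine...
moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved / len(keys):.0%} of keys changed shards")  # roughly 80%
```

With consistent hashing, only about 1/5 of the keys would move when going from 4 to 5 nodes; with modulo, about 4/5 do, and every one of them has to be copied while the site stays up.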
More like you've now more or less written a tiny piece of postgres-xc: http://wiki.postgresql.org/wiki/Postgres-XC
(Think of it more as somebody's personal notes about how things work and not an exclusive source of breaking news or architecture revelations.)
Aggregation and filtering is value, too. Like HN, it's a channel with the expectation of a certain type of content. I can't possibly discover good tech talks (or any other content) entirely on my own.
This is the lowest of low hanging fruit. Many people don't realize it but a ton of huge media sites use Akamai to offload most of their "read-only" traffic.
This isn't quite right. It was web.py at the beginning. They have started using Pylons after Conde Nast acquisition.
It would also be interesting to know the versions and any backstory. My guess is that none of this info exists because it, like most things done in a rush/under pressure, was probably attempted, didn't work right away and then tossed.
I've yet to find schema changes limiting in my ability to code against a DB (and I use MySQL, which is one of the most limiting in this regard). Plus, I appreciate the ability to offload things like data consistency and relationships to the database. I understand, however, where others might not feel the same way.
The lesson is: match your database to your use-case, not the other way around. Need advanced querying/reporting options? Get a warm, fuzzy feeling from a SQL prompt? Use MySQL. Want a plain jane key-value store? Use Voldemort/Kyoto Cabinet. Want flexible schemas? Use MongoDB. Want a Key-value store with secondary indexes and lots of scaling capabilities? Use Cassandra/HBase. Want a powerful datastore that's supported by a BigCo? Use DynamoDB or Cloud Datastore.
Three years ago at IndexTank we were looking for a SimpleDB replacement because it just didn't work as advertised. We explored a bunch of options, and we paid a significant cost to find out that deploying Cassandra would not be worth it for us. If you have never used Cassandra and choose it because you "want a Key-value store with secondary indexes and lots of scaling capabilities" then you're in for a world of hurt.
If MySQL works for your use-case, and it's the option you're the most familiar with, use it. You'd be doing yourself a disservice by not at least evaluating other options though.
And Cassandra is a key-value store with secondary indexes and lots of scaling capabilities (such as multi-datacenter deployments and multi-master replication), and some companies that aren't Google or Facebook do need these things. It sounds to me like IndexTank wasn't one of those companies.
I reiterate my point: choose what's best for your company, and don't settle at MySQL just because it's "good enough".
I'd also add that it's particularly suited for large, insert-heavy datasets.
MySQL is a high performing, highly scalable, ACID compliant relational database, when configured correctly.
The "MySQL is not production ready" meme was perpetuated by some well meaning, if ill-informed, fans of other RDBMS platforms.
You forgot the "with tons of brokenness, misfeatures, mistakes, problems and otherwise NOTABUG bugs that will never be fixed and cause immense amounts of pain". Literally every other RDBMS is a better option, thus there is no reason to use mysql.
It's particularly not going to convince people when MySQL is so widely used (from Wordpress installations to Facebook), and perhaps more importantly when it's offered as a standard option by the two largest VPS providers.
On topic, I'd be happy to offer some advice on how to set up MySQL in a way that limits (or eliminates) the concerns proffered by most "MySQL is not Production Ready" comments... the two most oft-cited problems being addressed by the following two my.cnf settings:
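The specific settings didn't survive in this copy of the thread. As a guess at what was meant (my assumption, not the author's actual list), the two most commonly recommended my.cnf lines target silent data truncation and non-transactional default tables:

```ini
# my.cnf -- a guess at the commonly recommended pair, not the author's
# original settings.
[mysqld]
# Reject invalid or out-of-range data instead of silently truncating it:
sql_mode = STRICT_ALL_TABLES
# Use transactional, crash-safe InnoDB by default instead of MyISAM
# (this is already the default in MySQL >= 5.5):
default-storage-engine = InnoDB
```

Note that sql_mode can still be overridden per-session by clients, which is the weakness discussed further down the thread.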
You can avoid buffer overflows in C by using a library that's got safe strings.
Does that make C safe? Nope.
The Windows NT architecture has an enormously rich security mechanism that can allow arbitrarily granular security statements to be made about almost everything. But the default policy until Windows 7 was "pretend you're Windows 95".
Did that make Windows more secure than Unix? Nope.
The baseline is what counts.
Of course, so is Oracle, SQL Server, and every other database known to man.
You have to tailor the configuration of any database server to meet your needs. MySQL is no different in this regard.
MySQL is different in this regard.
I like to consider the problem, before I recommend a solution generally ( I don't mean to accuse you of pushing a solution, I think you are offering help which is always appreciated ), but I think a lot of people are used to having had the choice already made ( and indeed, in some circumstances it is!).
One thing I always try to remember about mysql, as it is less than intuitive to me, is that the client is allowed to alter the sql-mode ( there is no way I am aware of to restrict this behavior directly; I have used proxies to filter it out as a sort of guardian, but that was not an ideal fix by any means ). Generally, if you don't have control of your clients ( or, hopefully, some good layers in front ) in the RDBMS world you are already sunk, but this has been more a guard against accidental breakage.
This can make it unsuitable for certain situations ( where you may not have control of the client ).
One thing that I think is both a strength, and a weakness ( again depending upon the situation ) is that mysql is very flexible and can be deployed in so many different configurations.
Generally I think it is best for people to carefully consider their situation and needs ( and be prepared to change when the situation does!).
I really enjoy working with Postgresql as well, and have long respected the code produced by that project.
In summary, I'd say there are many great databases ( both relational and otherwise!) which can be a real asset to solving problems. The best thing I think is to learn directly and continually :)
There are no settings for "make triggers actually work", or "remove arbitrary limitations like being unable to update a table referenced in a subquery", or "make views with aggregates perform well enough to be used", or to add expression indexes or check constraints or window functions or let you set defaults to functions or to have a transactional DDL or to make rollbacks not corrupt the database or to allow prepare/execute in procedures or to allow recursion in procedures or to allow triggers to modify the table they were fired against. That's the point, mysql is full of crippling limitations. There are non-broken databases available that are superior in every single way. Thus there is no reason to use mysql. I know quite a lot more about mysql than you seem to think I do, and there is a reason that the only thing I do with mysql is conversions from mysql to an appropriate database that actually works.
The "Don't use MySQL" argument smells like the "don't use bcrypt" argument to me. You're letting the perfect be the enemy of the good: for 95% of the use cases where you'd otherwise be doing something dumb like using MongoDB or homebrewing something, MySQL is a better choice--even if it isn't often the best choice.
Both technologies have their place and reasons to exist, but it's not solely for the ability to scale.
Facebook uses MySQL, largely, as a key-value store.
If you'd like another example at slightly less than Facebook size - RightNow (recently acquired by Oracle). They manage customer service for many (1,000+) different clients at huge scale: more than 300 MySQL databases spread throughout the world. If a website has a knowledge base, there's a good chance it's being managed on the backend by RightNow.
What does reddit use for queuing?
Before that at Netflix we developed a service that would hand out temporary keys to the requestor when they presented a proper certificate.
At reddit we put the secret keys on the instance, which was bad. :)
Redis does offer AOF (append-only file) persistence for durability, but it's not nearly as mature as PostgreSQL's (and comes with all sorts of caveats).
Like anything else in engineering, it's all a series of trade-offs to figure out what fits your needs the best.
We store arbitrary JSON in some fields, and even build indexes on that data. It's been remarkably fast so far, and other than a few small gotchas relating to type conversions and indexing, is really easy to use.
So far, everything's working as well as I'd have expected from something released by the PostgreSQL community.
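For illustration, the store-JSON-and-index-an-extracted-field idea can be sketched with Python's bundled sqlite3 standing in for Postgres (this assumes a sqlite build with the JSON1 functions, which recent CPython releases ship; the Postgres equivalent would be an expression index on `(data->>'tag')`):

```python
import sqlite3

# Store arbitrary JSON in a text column and index a field extracted
# from it -- analogous to a Postgres expression index over a json column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, data TEXT)")
conn.execute("""CREATE INDEX docs_tag_idx
                ON docs (json_extract(data, '$.tag'))""")
conn.execute("""INSERT INTO docs (data) VALUES
                ('{"tag": "python", "score": 10}'),
                ('{"tag": "sql", "score": 7}')""")
row = conn.execute("""SELECT json_extract(data, '$.score') FROM docs
                      WHERE json_extract(data, '$.tag') = 'sql'""").fetchone()
print(row[0])  # 7
```

The type-conversion gotchas mentioned above show up here too: json_extract returns SQL integers, reals, or text depending on the JSON value, so comparisons need matching types.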
CREATE TABLE kvstore (
    key VARCHAR(128) NOT NULL UNIQUE,
    value VARCHAR(128) NOT NULL
);
And, it has good performance when compared to MongoDB: http://thebuild.com/presentations/pg-as-nosql-pgday-fosdem-2...
So in this case, it is using Postgres as intended by the datatype designers.
Maybe you are thinking of the Ruby on Rails ORMs.
> Users connect to a web tier which talks to an application tier.
So, I'm assuming the web tier is nginx/haproxy and the application tier is Pylons.
Are the 240 servers mentioned all running both the web tier and the app tier?
Separating it at that point allows the web tier to offload much of the work (mostly rendering) while keeping it stateless, thus allowing effortless scaling of that tier.
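If the nginx/haproxy guess above is right, the wiring between the tiers might look something like this sketch (all names, addresses, and ports are invented, not reddit's actual configuration):

```nginx
# Hypothetical web-tier config: a stateless front proxying to
# application-tier processes. Addresses/ports are illustrative only.
upstream app_tier {
    server 10.0.1.10:8001;
    server 10.0.1.11:8001;
    server 10.0.1.12:8001;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_tier;
        proxy_set_header Host $host;
    }
}
```

Because the web tier holds no state, adding capacity is just adding another identical box to the pool.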
From a security standpoint, this sounds like a bad idea.
That said, reddit uses a mix of straight-C and Cython-ised Python, which is a bit like the best of both worlds.
My own projects are not yet large enough to have this cause an issue, but I can see where something the size of Reddit would indeed have issues that even the most aggressive caching can't resolve.