Hacker News

Can someone enlighten me as to why 6,000 tweets a second is something to make a big deal about? At 140 characters per message that comes out to 840,000 bytes/s, which is less than 1 megabyte per second. In 2014, is a service that can handle 1 megabyte/s impressive?



A single tweet is closer to 2,500 bytes on average in most of the data feeds. However, your point still stands: this is a "human-scale" data model that is relatively small and has a low data rate. It is pretty easy to engineer systems that can keep up with this even if you are not an expert in designing real-time database systems.

By comparison, many complex machine-generated data sources (e.g. real-time entity tracking) that are sometimes fused with the Twitter firehose operate at millions of complex records every second (often tens of gigabytes per second) that need to be processed, indexed, and analyzed in real time. You can't deal with that kind of data model using something like Twitter's current architecture, because the several-orders-of-magnitude difference in velocity and volume exposes the limitations of most database platform designs people typically use.


I think the issue is less the number of tweets per second (not very impressive, as you suggest) and more the requirement that those same 6,000 tweets fan out, in a timely manner, to all of the various feeds/locations they need to show up at. If it were a single page with those 6,000 tweets/sec, it would, as you say, be entirely unimpressive.


It's not. Figuring out which of the 240 million accounts they should be sent to is.


I spent a little while trying to implement a Twitter back end for fun during their fail-whale period, and it's really just a data-structure problem. (Edit: figuring out which of the 240 million accounts they should be sent to.) You can literally get the basic feed process running at 10k tweets per second with 100 million accounts on a single PC and a 1Gbit Ethernet connection, where the Ethernet connection is the bottleneck. Fanning that out to various places that listen in is a fairly basic scaling issue.

Not that I took it very far, but my first stab at seeing what the scaling issues would be was actually plenty fast to run their feed process at the time.

PS: The 'trick' is to keep two lists: one of everything a user follows, and another of everyone that follows each account. For showing messages to someone when they log in, you keep the last 10 message IDs (with a timestamp or sequence ID) per account, so if someone follows 5k accounts you can avoid looking at the vast majority of those messages, then sign their device up to listen for new messages. (Sure, sometimes you will need to look past that top 10, but it's rather effective.)
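A minimal sketch of that two-list scheme plus the last-10 cache, with all names and details my own invention rather than the actual code:

```python
from collections import defaultdict, deque

# Two lists per the scheme above: who each user follows, and who follows
# each account. 'recent' caches the last 10 (sequence_id, message_id)
# pairs per account so a login can skip most stored messages.
following = defaultdict(set)
followers = defaultdict(set)
recent = defaultdict(lambda: deque(maxlen=10))
seq = 0  # global sequence counter standing in for a timestamp

def follow(user, account):
    following[user].add(account)
    followers[account].add(user)

def post(account, message_id):
    global seq
    seq += 1
    recent[account].append((seq, message_id))

def timeline(user, n=10):
    # Merge only the cached heads of the followed accounts instead of
    # scanning every stored message; the newest n usually live here.
    merged = []
    for account in following[user]:
        merged.extend(recent[account])
    merged.sort(reverse=True)  # newest sequence IDs first
    return [mid for _, mid in merged[:n]]
```

For example, after `follow('alice', 'bob')` and a few `post('bob', ...)` calls, `timeline('alice')` returns the newest message IDs without touching message bodies at all.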

(And for the downvoters I can upload some code if you want to see it.)


Ah, hubris. Try simulating some celebrities talking to one another. 15M followers (Ashton Kutcher) / 10K fanouts per second = 25 minutes to deliver one tweet.

That's the difference between "getting the basic process running" and operating at scale.


Ehh, first off, the 2009 Twitter was very different from the 2014 beast. For some context: http://computer.howstuffworks.com/internet/social-networking...

Back when they were growing from 500,000 users to 7 million total users, they were having major issues, and that's when I was looking into things.

Anyway, not that I was suggesting 1Gbit would be fine today. Still, I was saturating a 1Gbit connection, so 15M (over twice the total users back then) * ~200 bytes * 8 / ~1000^3 ≈ 24 seconds. Did you have an extra 60x multiplier in there somewhere?
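The two estimates bound different resources, which is where the ~60x gap comes from. A quick sanity check of both numbers as stated in the thread:

```python
# Back-of-the-envelope check of the two fan-out estimates above.
followers = 15_000_000          # Ashton Kutcher's follower count, per the thread

# Bandwidth-bound: push ~200-byte messages over a saturated 1 Gbit/s link.
bytes_per_msg = 200
link_bits_per_s = 1_000_000_000
bandwidth_bound_s = followers * bytes_per_msg * 8 / link_bits_per_s
print(bandwidth_bound_s)        # 24.0 seconds

# Delivery-rate-bound: 10,000 fan-out writes per second.
fanouts_per_s = 10_000
rate_bound_s = followers / fanouts_per_s
print(rate_bound_s / 60)        # 25.0 minutes
```

Same tweet, same follower count: 24 seconds if the network link is the limit, 25 minutes if per-delivery writes are, a ratio of about 62x.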


The basic twitter system is easy enough. Now store all that data forever, run ads, collect ad stats, collect server stats, collect service stats, do authentication for API calls, etc. etc.

Things add up eventually. Suddenly your 6000 tweets/s has turned into a million op/s fanout.

Problems become a lot less straightforward over network links too.


Thanks for the offer. I would enjoy looking at the code.


How did you simulate 100 million interconnected accounts? Did you just randomize it? How many people does the average twitter account follow?


I don't know what it really looked like, but I randomly had 99% of the accounts follow 100 people and 1% follow 5,000. I don't know how many the average account followed, but apparently when a single user followed a few thousand accounts it was causing issues if the user rapidly refreshed the page.
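A sketch of how a skewed follow graph like that could be generated, assuming a uniform random choice of who to follow (the function name, seed, and graph size are hypothetical, not the original code):

```python
import random

# Hypothetical reconstruction of the skewed follow graph described above:
# 99% of accounts follow 100 others, 1% follow 5,000.
def build_follow_graph(n_accounts, seed=1):
    rng = random.Random(seed)
    graph = {}
    for user in range(n_accounts):
        fan = 5_000 if rng.random() < 0.01 else 100
        # Sample one extra pick so we can drop the user if they drew themselves.
        picks = rng.sample(range(n_accounts), fan + 1)
        graph[user] = [a for a in picks if a != user][:fan]
    return graph

# Small graph for illustration; the original experiment used 100 million accounts.
graph = build_follow_graph(10_000)
avg = sum(len(f) for f in graph.values()) / len(graph)
print(f"average follow count: {avg:.0f}")  # expected around 0.99*100 + 0.01*5000 = 149
```

Inverting this into a followers list (who follows each account) then gives the second of the two lists needed for fan-out.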

PS: I did not keep the old messages, just their IDs, because the messages were not going to fit in RAM. My assumption was that using Redis or another key-value store would be fine; what they needed was an internal index so you would only have to look up messages that would actually be displayed.

Note: their current setup, once they worked the bugs out, handled a peak of over 100,000 tweets per second in 2013: https://blog.twitter.com/2013/new-tweets-per-second-record-a... That is well beyond the target I was shooting for.


Google "scaling twitter" to see the actual requirements of their system.


I'd love to see the code.


+1 for seeing this code!


The word "handle" is doing a lot of work there. It's a collation engine.


We store more than just tweets :)




