
BUT, that's via the firehose, which doesn't include direct messages or twitterers who've locked their feeds. The number also doesn't reflect what has to happen with each of those 200.

Sure it's not impressive if all they have to do is append a 140 char message to a flat file or DB table. It's all the other manipulation that's interesting.

What kind of 'manipulation'? They're basically routing messages, with a protocol change in the middle when the recipients' designated protocol differs (i.e. IM to text message). This can't be all that computationally expensive when the system is engineered right.
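To make the "routing with a protocol change" point concrete, here is a rough sketch of that kind of per-recipient dispatch. The handler names and data shapes are invented for illustration; nothing about Twitter's actual internals is public here.

```python
# Hypothetical per-recipient protocol dispatch; handler names and the
# recipient format are made up for illustration only.

def deliver_sms(user, text):
    return f"SMS to {user}: {text}"

def deliver_im(user, text):
    return f"IM to {user}: {text}"

def deliver_web(user, text):
    return f"Web timeline for {user}: {text}"

HANDLERS = {"sms": deliver_sms, "im": deliver_im, "web": deliver_web}

def route(tweet, recipients):
    # Each recipient may have chosen a different delivery protocol,
    # so the router picks a handler per recipient.
    return [HANDLERS[proto](user, tweet) for user, proto in recipients]

print(route("hello", [("alice", "sms"), ("bob", "web")]))
```

The dispatch itself is cheap, as the comment argues; the cost lives in doing it reliably for every follower of every tweet.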

Knowing how little activity Twitter really has makes them look incompetent in light of the service outages they experienced over the last year. Either Ruby on Rails is really, really non-performant, or the Twitter code monkeys got the system architecture wrong.

The correct metric is probably not "tweets incoming" but "tweets delivered". There are some multiplicative factors that start to bite.

I say "probably" because I'm an outsider guessing, but I'm reasonably confident about that statement. Just not quite 100%.

(In the colloquial sense of 100%, of course; I'm not mathematically 100% sure Twitter exists.)

For starters, there's a few interesting links at http://highscalability.com/scaling-twitter-making-twitter-10...

summary: "Rails and Ruby haven’t been stumbling blocks... The performance boosts associated with a “faster” language would give us a 10-20% improvement, but thanks to architectural changes that Ruby and Rails happily accommodated, Twitter is 10000% faster than it was in January"

They're dealing with duplication, consistency, searchability, etc. across distributed storage systems and a variety of service mechanisms (more than just protocols). A user with 1,000,000 followers has every message duplicated and cached at multiple layers, up to 1,000,000 times. The message can show up on the web, the API, or any of the protocols, and it has to persist.
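That duplication is the write-time fan-out model. A minimal sketch, with invented data structures standing in for whatever caching layers Twitter actually uses:

```python
# Sketch of write-time fan-out: one tweet from an author with N
# followers becomes N timeline entries. Data structures are
# illustrative only, not Twitter's actual design.
from collections import defaultdict

timelines = defaultdict(list)  # follower id -> cached timeline
followers = {"celebrity": ["u%d" % i for i in range(1000)]}

def post(author, text):
    # One logical write turns into len(followers[author]) physical writes.
    for f in followers[author]:
        timelines[f].append((author, text))
    return len(followers[author])

writes = post("celebrity", "hi")
print(writes)  # 1000 cached copies for a 1000-follower author
```

Reads become cheap (each follower's timeline is precomputed), at the cost of write amplification proportional to follower count.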

They're not incompetent code monkeys; they just guessed way low when they designed the architecture. It looks like the big move was from a CMS model (not at all unreasonable for a "microblogging" service) to a messaging model. In hindsight, targeting messaging to begin with would've saved them some downtime, but it wouldn't have been practical in the short term.

From http://gojko.net/2009/03/16/qcon-london-2009-upgrading-twitt... it sounds like the insertion number averages around 9600 messages per second. That's avg follower count * avg tweets inbound, and that's only the input.
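As a back-of-the-envelope check: taking the ~200 inbound tweets per second mentioned earlier in this thread, an average follower count of 48 would produce that 9600/s insertion rate. Both the pairing of those numbers and the average follower count are my guesses, chosen only to show how the multiplication works:

```python
# Back-of-the-envelope fan-out arithmetic. The 200/s inbound figure is
# from the thread above; the average follower count of 48 is a made-up
# value chosen to illustrate how 9600/s could arise.
inbound_tweets_per_sec = 200
avg_followers = 48  # hypothetical

deliveries_per_sec = inbound_tweets_per_sec * avg_followers
print(deliveries_per_sec)  # 9600 timeline inserts per second
```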

Edit: real, solid numbers are hard to come by. I suspect it's proprietary.

What you say is echoed by another commenter: "Every incorrect assumption in this post seems to think that 1 tweet on twitter = 1 database row = 'So easy!'. You've left out the user fanout! One Obama Tweet = 1M database rows, someplace."

Assuming that is true, which I have trouble believing, it sounds like they need some help with normalization. I understand the tradeoffs, but it just seems crazy.

SELECT tweets.* FROM tweets
  INNER JOIN tweeters ON tweeters.id = tweets.tweeterid
  INNER JOIN followers ON followers.followerid = loggedinuserid
    AND followers.followee = tweets.tweeterid

Yeah, okay, I know there are performance problems with joins, but there are performance problems with 1,000,000 inserts as well. You could cache the list of followees and do an IN statement such as:

SELECT * FROM tweets WHERE tweeterid IN (cached_comma_separated_list_of_followee_ids)

or whatever to improve select performance.
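The read-time join approach sketched above can be tried end to end with an in-memory database. The schema and ids below are invented for illustration:

```python
# Minimal sqlite3 demo of the read-time join approach: no fan-out on
# write, one join on read. Schema and ids are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE tweets (tweeterid INTEGER, body TEXT);
CREATE TABLE followers (followerid INTEGER, followee INTEGER);
""")
db.executemany("INSERT INTO tweets VALUES (?, ?)",
               [(1, "hello"), (2, "world"), (3, "ignored")])
db.executemany("INSERT INTO followers VALUES (?, ?)",
               [(99, 1), (99, 2)])  # user 99 follows users 1 and 2

logged_in_user = 99
rows = db.execute("""
    SELECT t.body FROM tweets t
    JOIN followers f ON f.followee = t.tweeterid
    WHERE f.followerid = ?
""", (logged_in_user,)).fetchall()
print(sorted(r[0] for r in rows))
```

This keeps writes at one row per tweet; the tradeoff is that every timeline read pays for the join, which is exactly the tension the thread is arguing about.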

Thanks for the links. This slideshow was also useful to me as it has a couple conceptual diagrams of what Twitter is doing. http://www.slideshare.net/Blaine/scaling-twitter

I really, really doubt that "all they have to do is append 140 chars" to a table.
