

Zynga Moves 1 Petabyte Of Data Daily; Adds 1,000 Servers A Week - sahillavingia
http://techcrunch.com/2010/09/22/zynga-moves-1-petabyte-of-data-daily-adds-1000-servers-a-week/

======
dpcan
Here's how this adds up to me:

\- 215M users.

\- Up to 1000 servers per week.

\- Say, they've been doing this for 30 weeks.

\- So they need 1 server for every 7167 users? Is this normal for mmorpg type
games?

\- Are 215M people playing around the clock?

I'm just a little baffled by this number.
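Spelled out (the 30-week buildout window is the comment's own assumption):

```python
users    = 215_000_000   # reported player count
per_week = 1_000         # upper bound on new servers per week
weeks    = 30            # assumed length of the buildout

servers = per_week * weeks
print(round(users / servers))  # → 7167
```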

~~~
_delirium
7000 users/server is actually pretty high for an MMO, though I don't know how
Zynga's MMOs compare to something like World of Warcraft in terms of server
resources needed. The numbers being floated around for World of Warcraft are
around 75,000 cores for 10m subscribers, or about 133 subscribers per core.
Obviously those aren't single-core servers, but it's still under 1k per
server. Of course this and the Zynga numbers are also both subscriber numbers,
not peak player numbers, and I'm not sure how play frequency/duration differs
between the two companies' userbases.

Some MMO designers actually really want to bring that number even lower,
though lots of other people at game companies are looking at ways to pack more
users on each core. But from a designer's perspective, if you typically get
less than 1/50 of a CPU slice per user, maybe 1/100, and much of that is taken
up by doing basic bookkeeping, it's hard to put in things that even 1990s RPGs
took for granted, like NPCs with at least passably interesting AI.

~~~
sbov
Just to emphasize your point in the last paragraph: comparing the game
mechanics in Zynga games to those in traditional MMOs is laughable.

Zynga games are more comparable to social websites than WoW.

~~~
crystalis
Could you point out a WoW mechanic that is actually taxing? From what I
understand, WoW AI is a height-agnostic beeline and battles are heavily
scripted at best. The tricky part seems to be the real-time player
interaction, a.k.a. the 3D chatroom, which has nothing to do with game
mechanics (e.g., IMVU).

~~~
sbov
Many standard rpg mechanics in a large, seamless, 3d world full of both NPCs
and players can tax resources.

Movement is one already mentioned, but in general, AI isn't only turned on in
combat. NPCs move outside of combat - most creatures in the world wander about
or follow a pre-defined path. I don't recall if they did in WoW, but I know in
EverQuest creatures used spells of their class both in and out of combat, such
as buffs and heals - a creature in EverQuest would even heal and buff up its
allies. If monsters are buffing, you're now tracking those buffs & their
durations - when we added monster buffing to our multiplayer game, we made
the buffs infinite in duration to get acceptable performance.

Another operation that could be taxing is calculating line of sight (LOS) -
this factors into any action an NPC or player would do. You can't have
monsters attacking players, or buffing allies, through walls.
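A minimal sketch of that duration trade-off: timed buffs have to be scanned on
every server tick, while infinite-duration buffs are set once and never
revisited. The class and buff names here are illustrative, not from any real
MMO codebase.

```python
import math

class Entity:
    def __init__(self):
        # buff name -> absolute expiry time (math.inf = never expires)
        self.buffs = {}

    def apply_buff(self, name, now, duration=math.inf):
        self.buffs[name] = now + duration

    def tick(self, now):
        # Timed buffs force a scan of every entity's buff list each
        # server tick; infinite-duration buffs never show up as expired,
        # so they cost nothing here once applied.
        for name in [n for n, t in self.buffs.items() if t <= now]:
            del self.buffs[name]

mob = Entity()
mob.apply_buff("regeneration", now=0.0, duration=30.0)
mob.apply_buff("ally_shield", now=0.0)   # infinite: "set and forget"
mob.tick(now=31.0)
print(sorted(mob.buffs))  # → ['ally_shield']
```

The scan is still linear in the number of active buffs per entity; making every
buff infinite removes it entirely, which matches the performance fix described
above.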

~~~
crystalis
The problems movement incurs aren't "game mechanics" so much as "3d chatroom
mechanics".

Buff tracking as a performance hit is surprising to me, though. I'd presume
you were tracking monster lifespan - with this and an offset for 'AI
interruptions' (like for chasing players as the buff wore off) you could
recover the buff times and remaining durations without touching the monsters
when they were out of player LOS. I can see LOS as a more taxing system, but
monster LOS has no z-axis and the dungeons don't seem to require too many
nodes for a pathing graph...

At any rate, most MMOs have rather... shallow... gameplay mechanics, and I
don't think it's really computation that's holding them back.

------
jaxn
This makes me feel a little better.

We created a multi-player RPG for mobile phones at Startup Weekend a few years
ago. It actually did OK and was growing pretty quickly. I hosted it on
AppEngine to start and the resources just started going sky high. Players were
initiating an attack at least once per second (not to mention all of the other
server calls like messaging, weapon purchasing, scanning to see who is in your
area).

I thought we were going to have to reengineer in order to try and make it as
efficient as possible. The thing is, these addictive multi-player games that
are all about throwing data back and forth end up taking huge amounts of
resources.

We eventually ditched the game, but the site is still up at
<http://killyourneighbor.com>

~~~
iampims
It'd be great if you could elaborate a bit on how App Engine wasn't a fit or
was too expensive for your use case. Was performance an issue, or just cost?
Thanks.

~~~
jaxn
Just cost.

There were some issues with a high number of writes, but it has gotten better.

The unscheduled downtime also killed us.

But really, it was just the sheer amount of traffic. Real-time multi-player
games that people have with them all the time can just create a ton of
requests.

~~~
iampims
Thanks for the reply. Do you think another hosting environment (factoring in
the administration costs and all) would have been cheaper?

------
antirez
Apparently these are 1,000 _EC2 instances_, not real servers. This changes a
lot of things.

Btw how is it possible that 10 or 20 years ago using megaflops or MIPS (or
other sane measures of computer power) was normal and now we are here talking
about "servers"? :)

Edit: I'm searching around the internet to see if there is some new general
measure of computing power that reflects real-world performance in
non-scientific applications. No luck so far.

------
swombat
1000 new servers a week is a ridiculous number for this sort of company. Their
code must be really, really bad. Anyone who works there care to chime in? Is
it all written in ColdFusion or something?

~~~
portman
Downvoted for not thinking through the numbers.

When Zynga launches a new game, they can get over 1M new DAILY players on that
game within the first week. [1]

For their games, it seems that 7% of daily players are online at any given
time. [2]

So within the first week of launch, you have about 100,000 people playing
simultaneously.

Let's say the client is sending 5 requests per second to the game server. And
let's say that this week, all 1000 servers went to the new game.

Now, of course that's just an AVERAGE across the day. In reality, as in any
distributed system, there will be a crest factor, and sometimes the load will
spike, say, 8x.

Is it ridiculous for a single server to process 500 requests per second on
average and 4000 requests per second peak? That seems pretty standard by my
book.

[1] <http://techcrunch.com/2010/06/14/pincus-frontierville/> [2]
[http://techcrunch.com/2010/08/17/zynga-launches-first-
locali...](http://techcrunch.com/2010/08/17/zynga-launches-first-localized-
game-debuts-chinese-version-of-texas-poker/)

~~~
Retric
An average of 5 requests per second from a casual game's client is a sign of a
poor design. Realistically you're very close to 1 user action = 1 request.
Now, a top-tier StarCraft micromanager may average 300 actions per (ed:
minute), but an average user should be somewhere between 1 action per second
and 6 actions a minute.

PS: The key to pulling this off is running the same simulation on the client
using the same random seed. User clicks: attack blob with fork. > Client says:
attacked blob at (timestamp, with fork). > Server says: blob took X damage at
(timestamp), random seed at (value). > Client shows: a long string of actions
that adds up to that value, and/or requests a new game state if something does
not add up.
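A sketch of that seed-sharing idea (the damage model and message contents are
invented for illustration): both sides run the identical simulation from the
same seed, so only the seed and the total need to cross the wire.

```python
import random

def roll_damage(seed, num_hits=3, lo=5, hi=15):
    # Both sides run the identical simulation from the same seed,
    # producing the same sequence of rolls.
    rng = random.Random(seed)
    return [rng.randint(lo, hi) for _ in range(num_hits)]

# Server: resolves the attack, then sends (timestamp, seed, total).
seed = 12345
total = sum(roll_damage(seed))

# Client: replays the same rolls locally to animate the hits, and
# only re-requests the full game state if its total disagrees.
client_hits = roll_damage(seed)
assert sum(client_hits) == total  # in sync: no extra request needed
```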

~~~
kingkilr
> 300 actions per second

You mean per minute, right? Either that or I'm way worse at SCII than I
thought.

~~~
Retric
Oops, fixed.

------
IgorPartola
How does this compare? How many does Google/Amazon/Facebook/Dropbox add a
week?

------
c1sc0
Anyone else think this is a sad use of both human and computing resources?
Technically interesting, yes, but all of that for _playing freaking
farmville_?

~~~
portman
Farmville is a perfect example of the self-actualization tier of Maslow's
Hierarchy of Needs. [1]

The people playing Farmville are enjoying themselves and expressing
themselves. Would it be better if they deployed that cognitive surplus and
attention towards Wikipedia or a similar project? Of course.

But it doesn't make me sad that there are 215 million people in the world who
have a comfortable enough life that they can afford to dabble in some casual
games. In fact, I hope that in 100 years we can have all 7+ billion humans
striving for self-actualization instead of food and water.

[1] <http://en.wikipedia.org/wiki/Maslows_hierarchy_of_needs>

------
xtacy
Does the 1000 servers a week also include new servers that replace old failed
ones?

------
nkassis
That doesn't surprise me. I was running a network of 150 computers while in
college, and one of the computers was always doing a crazy amount of
downloading and uploading (on the order of a gigabyte a day). Our IDS flagged
it and I started investigating what was going on. I finally figured out that
the user was using a flash game site. I couldn't believe a darn stupid game
could cause that much traffic, but we asked the person to stop for a day and
it was exactly that.

------
robryan
You'd think this stuff could be pretty efficient if done right: defer as much
as possible to the flash application. I guess security is an issue if you do
too much processing in the flash, and probably a lot of the servers exist just
to serve the flash.

~~~
weaksauce
If the flash is not changed very often, why not use a CDN for the flash
delivery and then have the flash poll their servers with the farming actions,
etc.?

~~~
owyn
I'm sure they use a CDN for delivery of the flash assets. The flash is still
most likely updated several times a day.

We used to batch commands in the client and submit every 6 seconds. It was
still a crazy number of writes, because basically every click is a database
update. MySQL is really bad at that sort of work load, but I migrated almost
everything to Redis. Really fun problems to solve!
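A rough sketch of that kind of client-side batching, using the 6-second
interval mentioned above (the transport is abstracted to a callable, and the
clock is injectable so the example is testable):

```python
import time

class CommandBatcher:
    """Buffers per-click actions and flushes them as one request
    roughly every `interval` seconds, instead of one request per click."""

    def __init__(self, send, interval=6.0, clock=time.monotonic):
        self.send = send          # callable taking a list of commands
        self.interval = interval
        self.clock = clock
        self.pending = []
        self.last_flush = clock()

    def record(self, command):
        self.pending.append(command)
        if self.clock() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        if self.pending:
            self.send(self.pending)  # one request for the whole batch
            self.pending = []
        self.last_flush = self.clock()

# Usage with a fake clock and an in-memory "server":
sent = []
t = [0.0]
b = CommandBatcher(sent.append, interval=6.0, clock=lambda: t[0])
b.record("plow"); b.record("plant")
t[0] = 6.5
b.record("harvest")              # interval elapsed: flushes all three
print(sent)  # → [['plow', 'plant', 'harvest']]
```

Batching cuts request count, but as noted above every click is still a write
on the backend, which is why the storage engine matters so much.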

------
kno
The 1,000-servers-per-week number is just not realistic. I'd like to believe
the author misspoke; otherwise it would take more than Zynga's entire staff
just to configure 1,000 servers in a week's time.

~~~
lukev
1000 servers is not at all unrealistic if you've automated the process.

~~~
d2viant
It's likely fully automated, since it's running on EC2.

------
greenlblue
I read this and thought what a waste of electricity. Putting all those
processors to work on folding proteins would have brought about so much more
social good.

------
ashitvora
When I first read this on TechCrunch, I thought it was a typo.

