That said, I worked at NetApp when WoW first released, and their architecture at the time was blade servers talking to Oracle database instances that used EMC SAN boxes for storage. We tried to convince them they would have less downtime if they converted to Oracle over NFS, as Oracle had done in their big data center in Texas.
As an engineer it started me thinking about the whole 'world as a database transaction' model of things. How was that built? Where did latency matter, and where did it not? What could you undo, and what had to be at-most-once? And then the scale of that with respect to localizing transactions when actors (characters) were within scoping distance of database changes. Quite the interesting challenge.
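The "scoping distance" idea can be sketched in a few lines: a state change only needs to be delivered (and kept transactionally consistent) for actors close enough to observe it. This is a toy sketch of that interest-management pattern; all names, the radius, and the coordinates are invented for illustration.

```python
import math

SCOPE_RADIUS = 100.0  # hypothetical "scoping distance"

class Actor:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y
        self.inbox = []  # updates this actor has observed

def within_scope(a, b, radius=SCOPE_RADIUS):
    return math.hypot(a.x - b.x, a.y - b.y) <= radius

def broadcast_change(source, change, actors):
    """Deliver a change only to actors close enough to observe it,
    keeping the 'transaction' local instead of world-wide."""
    recipients = [a for a in actors if a is not source and within_scope(source, a)]
    for a in recipients:
        a.inbox.append(change)
    return recipients

actors = [Actor("A", 0, 0), Actor("B", 50, 0), Actor("C", 500, 0)]
seen_by = broadcast_change(actors[0], "A cast a spell", actors)
print([a.name for a in seen_by])  # only the nearby actor B receives the update
```

The point being that the cost of a change scales with the number of nearby observers, not the whole world's population.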
Oh and the 'weird custom board' on that blade is a compact flash to IDE adapter. The blades booted from compact flash.
EDIT: Hmm, not a CF adapter; going back and looking again I think that was the NVRAM card, which allowed them to recover transactions after a server crash.
I can only imagine what the bill for that setup looked like. Did they jump all in with a Symmetrix?
>'It was the only game where the social aspects were as fun, or sometimes more fun, than the game play.'
Personally, the social aspects are what eventually drove me to quit MMOs for good (so far) not long into the life of WoW.
It seems to me that a lot of people look to MMOs for escape and a situation where the game world is simultaneously one person's preferred life and another's play time is a recipe for trouble.
Likewise, though for a different reason. I never engage enough, even in a guild or similar, I guess because I usually end up playing MMOs without RL friends jumping in. I find the limited interactions boring enough that I eventually leave, since the gameplay alone isn't enough in any MMO to keep me.
You can level pretty quickly just by soloing, so you don't need to find a buddy to help. With the Dungeon and Raid Finders, you don't even need to join a guild or have a set of regular friends for tackling harder content.
With the battleground queues, you don't need friends to participate in PVP.
Unless you want to attempt the harder versions of the same Raid content or try to compete more rigorously at PVP, all of the classic reasons to meet and keep friends in WoW are gone.
The advantage is that the game is a lot more approachable for the novice (free to play up to level 20), but OTOH, it moved a lot of the challenge to raiding and end-game, which a fair number of people have no interest in doing.
The 'weird custom card' mentioned in ChuckMcM's comment is actually a SmartArray 6i raid controller. The black slot is to put a small stick of ram into to act as cache.
Happy to answer any and all questions about the server :)
Out of interest, there are two CPUs in this server, and they're both an AMD Opteron 275 (2.2 GHz, dual core). The 512MB Hynix ram sticks would have come from the factory, while the 2GB micron sticks would have been an upgrade.
Maybe someone from Blizz can even tell them what the server's role was...
I saw your blog post about your WoW blade, via Hacker News, and couldn't help but want to contribute.
I have worked as a sysadmin and hardware specialist for the past 5-6 years, with a lot of experience with HP equipment - I was wondering if you'd like me to help provide better descriptions of any of the parts of your blade? Extra detail like the fact that the Hynix ram came in the machine from the factory, but the Micron ram was added as an upgrade. Your machine probably only ever had those 6 sticks in it, as the next generation of servers used newer DDR2 and wasn't compatible (hence no reason to remove ram from this blade). That blade is what is called a 'half-height' blade, and it would have sat vertically in a blade enclosure, i.e., the HP BL25p text would be horizontal. Blade enclosures usually hold up to 16 half-height blades, or 8 full-height blades, but yours is a p-class blade, so it would have been in an enclosure for half-height blades only (8 blades max). The network ports and all other connections are on the blade enclosure. If you'd like a full spec sheet on your blade and the enclosure, you can look at this PDF from HP: http://h10010.www1.hp.com/wwpc/images/ap/BL25p_v7.pdf
Regarding your concerns about the CPUs and heatsink paste, etc: The CPUs will actually be attached to the heatsinks and you should simply be able to lift them out as a unit. This is done so that if a CPU were to fail in the blade you can replace the CPU in just a few minutes. The detail of which CPU you have is more than likely attached to the underside of the heatsink as well, next to the processor, but you can actually tell which one you have from the model number (that sticker that says 392439-B21). It's an HP ProLiant BL25p 275 2.2GHz-1MB Dual Core 2P 2GB Blade Server. This means your CPUs are AMD Opteron 275s, and you have two of them. The '5' in BL25p also indicates an AMD CPU. Intel servers end in a zero.
I can also tell you what most of the bits of hardware on the motherboard do too, if you like. I'm happy to annotate photos. The large green card towards the rear of the server with the little silver heatsink and the empty black slot is the hard drive controller. In this server the model would be a SmartArray 6i. The slot is for a small stick of RAM (128MB for this model) that the controller would have used for cache - it probably never had any installed though.
As general information: the little 'add in cards' are called daughter-boards, and are actually completely normal in servers, especially of this size, partially due to space constraints. The main reason for daughter boards though is so you can quickly and easily replace a failed component. Servers are generally designed to be easily and quickly serviceable. I've personally replaced a server motherboard in under 10 minutes (from power off to power back on again). It's generally all tool-less and extremely modular.
The clear magnetised lid on this server is definitely not standard - Blizzard must have added it when they decided to memorialise the server. The standard lid would be metal and held on with a quick-release lever mechanism. I really like the magnetised approach though :)
I can keep going on, but yeah, let me know if you'd like any information or more insight into the server. Glad to see that it's in the hands of someone who obviously cares about the equipment though :) it's nice that they made them into a collectible rather than just selling them to a used equipment vendor.
OS, tools, programming languages, how did the different parts of the software (such as, again, continents) communicate between themselves... For example, I was told once dungeons and raids were/are scripted in Lua.
At launch there were 2 continents, but 4 server blades per server.
Scripted events take place in Lua. But most raids until, I believe, Cataclysm weren't scripted so much as just mob abilities + cooldowns.
Note: not a WoW server architect, just played way too much WoW.
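The "mob abilities + cooldowns" model can be sketched very simply: each ability fires whenever its cooldown has elapsed, with no scripted timeline at all. This is a toy illustration, not Blizzard's actual code; the ability names and timings are made up.

```python
class Ability:
    def __init__(self, name, cooldown):
        self.name, self.cooldown = name, cooldown
        self.ready_at = 0.0  # time at which the ability is next usable

    def try_cast(self, now):
        """Cast if off cooldown; otherwise do nothing."""
        if now >= self.ready_at:
            self.ready_at = now + self.cooldown
            return True
        return False

abilities = [Ability("Cleave", 6.0), Ability("Fireball", 10.0)]
log = []
t = 0.0
while t <= 12.0:                 # simulate 12 seconds of combat in 1s ticks
    for ab in abilities:
        if ab.try_cast(t):
            log.append((t, ab.name))
    t += 1.0
print(log)
```

The resulting fight pattern emerges from overlapping cooldowns rather than any scripted sequence, which matches the description above.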
What I mean by this is that currently the only way to have lots of players playing at once in the same realm is to buy very expensive hardware. That, plus the number of DoS attacks you receive if you are popular, means that having a private WoW server online is anything but cheap.
I was always more impressed with Asheron's Call's management of areas: it had one seamless, zone-free world that would load land blocks as players accessed them.
I've done a lot of reverse engineering of World of Warcraft as I worked on its internals for years. If this is an interesting subject I might write about it. Anything specific people would want to know?
I've mostly learned about the internals of the WoW client itself and how it interacts with the server. I suppose I will eventually write about it when inspiration strikes me, but I'll be happy to answer any more specific questions you have. It's a very broad subject.
>OS, tools, programming languages, how did the different parts of the software communicate between themselves...
I don't know the details of the WoW software architecture but I know there were a lot of weird side-effect bugs in the game.
My favorite weird bug was the one where you couldn't craft a stack of items (bandages, for example) while wearing 5 pieces of your class-specific item set. So rather than crafting a stack of 20 bandages, you'd have to craft them one at a time. If you took off one piece of the set, you could craft stacks again.
I spent a bit of time wondering what type of design would cause that sort of bug....
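Purely as speculation about what that design might have looked like: one way a bug like this arises is an event-driven system where each crafted item fires an event, and a 5-piece set bonus registers a handler whose side effect clobbers the pending craft queue. Everything in this sketch - the event names, the handler, the mechanism - is invented to illustrate the failure mode, not recovered from the actual game.

```python
class EventBus:
    def __init__(self):
        self.handlers = []

    def fire(self, event, state):
        for h in self.handlers:
            h(event, state)

def set_bonus_proc(event, state):
    # Hypothetical buggy handler: a 5-piece set bonus reacts to item
    # creation, and its side effect wipes out the remaining craft queue.
    if event == "item_crafted" and state["set_pieces"] >= 5:
        state["craft_queue"] = 0

def craft_stack(count, state, bus):
    state["craft_queue"] = count
    crafted = 0
    while state["craft_queue"] > 0:
        state["craft_queue"] -= 1
        crafted += 1
        bus.fire("item_crafted", state)  # the proc can clobber the queue here
    return crafted

bus = EventBus()
bus.handlers.append(set_bonus_proc)
with_set = craft_stack(20, {"set_pieces": 5}, bus)     # only 1 bandage crafted
without_set = craft_stack(20, {"set_pieces": 4}, bus)  # full stack of 20
print(with_set, without_set)
```

Unequipping one set piece drops the handler's condition below 5, and the stack crafts normally - consistent with the workaround described above.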
There's a lot of detail to go into in this field; suffice it to say that you can easily make a career out of knowing how to design efficient ESBs.
Obligatory Wikipedia link for more info: http://en.m.wikipedia.org/wiki/Enterprise_service_bus
They said it was more SAN + a centralised Oracle DB. Which is a pretty common approach. The Oracle DB keeps the different blades in sync but also allows trivial data transportation.
XML seems unlikely unless they were storing XML within Oracle but that seems odd/insane for their use case. It might make sense if they were sending data geographically but as far as I know WoW realms are in one location only.
The part I'm talking about is the behind-the-scenes communications layer for the application itself - how you pass information between servers in real time with low latency.
Think of things as a stack. You have the top layer, which is the application. It runs on an individual server as a process, or set of processes. You then have middleware, which is software that handles communication between applications, whether on the same box or between servers. Middleware is also responsible for handling communication with the database (Oracle, in this case, possibly). This is the database, or information storage, layer.
Ie: application layer --> middleware --> database.
This middleware is the portion I'm postulating could be using XML, based on my general enterprise experience. It's not stored anywhere, it's simply a transport mechanism for data between applications when you don't need to store it. More than likely it would contain information such as 'this user is entering the battlegrounds, please create a slot for them with these details', or 'this player just caught the boat to another continent, I'm handing them from me to you and here is all their information'.
XML is great for this kind of 'live' data transportation, although I'd probably prefer JSON these days.
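To make the 'handoff' example concrete, here's what such a message might look like as JSON. The field names and values are entirely invented - the real wire format is unknown - but the shape is typical of this kind of middleware message.

```python
import json

# Hypothetical 'player caught the boat to another continent' handoff message.
handoff = {
    "type": "continent_handoff",
    "player_id": 12345,
    "from_server": "kalimdor-03",
    "to_server": "eastern-kingdoms-01",
    "state": {"x": 101.5, "y": -43.2, "health": 4200},
}

wire = json.dumps(handoff)    # serialize for transport over the middleware
received = json.loads(wire)   # the receiving server decodes it
print(received["type"], received["player_id"])
```

Nothing here touches a database: the message exists only in transit, which is the point being made above.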
What are your savings doing it that way? It just seems to cost more of everything (resources, man-power, etc).
It also helps reduce overhead and latency. Imagine if you had to constantly write the actions of users to a database in real time for 50000 users, including all the metadata, just so your servers could communicate amongst themselves. The latency involved would be huge, even with today's vastly-better equipment. A fast middleware layer greatly helps to cut down on the data that needs to be written to the back end system, while also improving latency, for what's really not a lot of extra complexity. Two birds with one stone, so to speak.
This approach also helps you scale - you can treat everything as a node, with the middleware data as distinct messages. Messages are created and consumed from queues, and so you can have multiple application servers and multiple database servers, with applications publishing messages onto queues for processing, with the database servers consuming these messages if they have the resources to do so. Adding more capacity to the system then becomes as simple as creating a new database server and telling the middleware it exists - the extra resources are automatically used.
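The publish/consume pattern described above can be sketched with Python's standard library, using a local queue to stand in for real middleware. The 'database server' names are illustrative; the takeaway is that adding capacity is just attaching another consumer to the same queue.

```python
import queue
import threading

work = queue.Queue()
results = []

def db_worker(name):
    """A 'database server' that consumes messages when it has capacity."""
    while True:
        msg = work.get()
        if msg is None:          # sentinel: shut down cleanly
            work.task_done()
            return
        results.append((name, msg))
        work.task_done()

# Scaling out is just starting another consumer on the same queue.
workers = [threading.Thread(target=db_worker, args=(f"db-{i}",)) for i in range(2)]
for w in workers:
    w.start()

for i in range(10):              # application servers publish messages
    work.put(f"action-{i}")
for _ in workers:                # one sentinel per worker
    work.put(None)
for w in workers:
    w.join()

print(len(results))              # all 10 messages processed across both consumers
```

Each consumer pulls work only when it's free, so the extra resources are used automatically - exactly the scaling behaviour described above.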
Most enterprises work this way. The basic outcome is that you can start treating the components as services, with easy separation of duty.
There's more detail here: http://en.m.wikipedia.org/wiki/Middleware
I expect they use a hybrid approach - some monolithic, some message-bus.
My lack of foresight has led to it being stuck, still in the original shipping box, in my parents' basement.
I'll probably sell the server at some point but I have no idea what a fair price for it is. Perhaps I'll sell it using a double-blind auction.
How does EVE online work by the way ?
I'm fascinated by this stuff. I wish there was a single-realm MMO like WoW.
I guess EVE Online is unique on that level.
I'd love to see a game using more p2p architectures to enable small parties to offload servers.
Worth a read
Same, I'd be most interested in seeing how WoW was initially built to mitigate the lag that plagued MMOs at the time. Handling that well was key in my decision to play the game at all since it was critical for PvP play.
>'How does EVE online work by the way?'
Here's a pair of older articles from my bookmarks about engineering of EVE.
>'I'm fascinated by this stuff.'
Yeah, it's a world apart from your standard web site / services model.
The WoW 'cross realm' zones are also the 'instanced' zones, so while they are cross-realm they actually hold much less data and fewer people than the single-realm parts. Even the 'cross realm' areas are still restricted to a 'battlegroup' of 5-6(?) realms.
- server code still in Python (single-threaded)
- one CPU thread per solar system
- during big battles with >2000 people, the server slows down time (tick rate) and makes the game unplayable
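That "slow down time" behaviour (EVE calls it time dilation) can be sketched in a couple of lines: when a solar system's single thread can't simulate a tick in real time, the server stretches the tick instead of dropping updates. The capacity number here is invented for illustration.

```python
BASE_TICK = 1.0   # seconds of game time per tick at full speed
CAPACITY = 500    # hypothetical: players one thread can simulate per real second

def tick_rate_factor(players):
    """Fraction of real-time speed the simulation runs at."""
    return min(1.0, CAPACITY / max(players, 1))

for n in (100, 500, 2000):
    print(n, tick_rate_factor(n))   # 2000 players -> game runs at 0.25x speed
```

Everyone in the system experiences the same slowed clock, so the simulation stays consistent - it's just painfully slow in a 2000-player battle.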
I'm not sure there's any hard and fast definition.
Generally, a blade is going to share power at least, likely network uplinks as well. Diskless configurations are common, with some sort of shared storage being used, but local disk is finding its way back into blades via SSD.
However, it's important to note that around this time Blizzard was heavily improving their Battle.net service and launched a download service where you could register your CD keys from their previous games such as Starcraft and Warcraft III and download a fully working ISO that did not require the physical CD to run. The server purchases may have gone towards that instead.
Would love it if Blizzard wasn't so secretive about everything, especially their tech. CCP, makers of Eve Online, have been very forthcoming in talking about their entire stack in their developer blogs and videos - from infrastructure and hardware, to how they profile their code and the software design choices they make.
A few months later the used equipment market was flooded with the same blades, so I imagine they offloaded quite a bit of hardware.
It's not a competition. It's the Internet. We're all the same.