Nvidia Project Denver: ARM Powered Servers (mvdirona.com)
38 points by yarapavan on Jan 16, 2011 | 22 comments


I predict that this is primarily going into a Cell-like architecture -- (relatively) pissweak main MPU surrounded by a bucketload of kickass specialized processors (in Toshiba/Sony's case, beefy vector units; in Denver's, super high end nVidia GPU cores) on a v. fast memory bus. The presence of Windows for ARM is interesting, too -- I wonder if we're looking at the beginnings of Xbox 720 here.


One of Nvidia's arguments is that there are few HPC applications that benefit from >4 cores that don't benefit much more from a good GPU implementation. Therefore, an optimal configuration is to have a reasonable 4-core CPU with really good serial performance/watt directing GPUs with amazing parallel performance/watt. That design definitely echoes the Cell concept.
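
To make that concrete, here's a minimal CUDA sketch of the split (illustrative only, nothing Denver-specific; saxpy just stands in for whatever data-parallel kernel an HPC code actually needs). The few fast CPU cores do the serial setup and dispatch; the GPU does the wide parallel part:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Data-parallel part: one lightweight GPU thread per element (saxpy: y = a*x + y).
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;                  // 1M elements
        const size_t bytes = n * sizeof(float);

        // Serial bookkeeping stays on the (few, fast) CPU cores:
        // allocate, initialize, copy, launch, check.
        float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        float *dx, *dy;
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Parallel heavy lifting runs across thousands of GPU threads.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);  // waits for the kernel
        printf("y[0] = %f\n", hy[0]);                       // expect 4.0

        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }

Everything outside the kernel is exactly the kind of serial glue a modest ARM core should handle fine -- and note that none of it appears from a plain recompile, which is the "custom code" objection raised elsewhere in the thread.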


Doubtful. A specialized graphics processor combined with a weak CPU makes sense in a console, where graphics generation and display is the core task.

In a server context, GPUs don't make sense. They won't make your web server or your database or Ruby any faster. In fact, in a server the GPU could be stripped out completely to save cost and power without impacting performance.

Programming a GPU or other specialized processors requires custom code. You can't just recompile your database written in C or Java to magically take advantage of them. And since people can't afford to rewrite the whole software stack to take advantage of such capabilities just to serve web pages, it's not going to happen.

As the article says, the main reason those chips might get traction is low power (and possibly low cost), since the cost of power is very important at scale and ARM is better at power management than Intel.


That's a key issue; strong GPU + weak CPU is great for just about every market other than general-purpose servers. So is it worth it for Nvidia to build a separate ARM-only chip just for the general-purpose server market? (especially considering that Marvell and Calxeda are already targeting that market)


I don't think Denver is going to end up in many general purpose servers regardless of what the press release might say.


I agree that they're not targeting the Linux VPS/Big Oracle Machine market; this design doesn't make any sense there. The power they're saving on the ARM MPU they're giving right back with the multicore nVidia GPU. There are significant HPC applications that leverage CUDA that would run very nicely on this design, and you can see a nice dual/quad-core low-power ARM MPU with two or three low-powered GPU units being an attractive option for tablet/phone manufacturers as well.


Loongson comes to mind (it's a MIPS dialect of RISC, with hardware acceleration for QEMU x86 emulation, dedicated DSP-like additional cores with FMA, etc.).


What happened to SeaMicro? http://www.seamicro.com/ I remember a burst of PR from them and then silence.


One might speculate that they're working on an Armada XP-powered version.


I wonder if "managed" cloud hosting services like Heroku (Ruby), Azure/AppHarbor (.NET) and AppEngine (Python/Java) could deploy ARM-based data centers to save power, since their platforms are basically processor-agnostic.


Even processor-agnostic programs will run poorly on a GPU. GPUs are about doing very basic tasks very quickly; CPUs can do more but are slower. Most computer programs take advantage of the extra features in a CPU and would not be able to run on a GPU.


What do you mean? ARM is not a GPU.


If this doesn't work out, at least some Atom servers would be nice as well.

It would be great for datacenters that need to save on energy costs and are IO bound, like those serving email/messaging with an already underused CPU anyway. I wonder what the maximum memory would be for these new servers.


Even if MSFT ever sells an ARM port of Windows Server, if you are running big server tasks (Oracle, SQL Server, IIS, etc.) then you will still need Intel for the horsepower.

If the servers are just low-power, high-efficiency file servers, then every NAS already uses an ARM core to run Samba.


Assuming the ARM servers can beat Intel in "work per joule", which I think they will, they can win in what is now the VPS space.

I use virtual private servers from three different vendors for a number of projects, and they make a lot of sense from a cost standpoint, but they have a huge drawback: you are sharing resources with others, and their workloads impact what you can do (plus any one of them could be a vector to introduce a hypervisor exploit and compromise the system). I'd much rather have a small computer to myself.

Imagine a 1U system[1] with 32 independent 1 GHz ARM servers[2]. That hits about the same power use as a modern Intel machine, has amazing memory bandwidth by comparison, and if you rent them out for ~$8/month each (the smallest slices available now) it is a money farm.
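
Back-of-the-envelope on the money-farm claim: 32 nodes x $8/month is about $256/month per 1U, so a standard 42U rack (an assumption here; leave a couple of units for switching) grosses on the order of $10k/month, before power, bandwidth, and the hardware-payback question that comes up below.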

Systems like this will also work for problems that scale laterally. Pushing that a bit: if a CDN vendor started putting racks of these at their strategic locations, you could improve your application's response time by hosting nearby, handling locally what you can, and bundling the heavy lifting back to your big servers in an efficient way.

[1] Or 2U if space isn't at a premium; they are easier to cool.

[2] Disks go elsewhere, say across a 1 Gbps Ethernet switch to a SAN. Build in the switches and you don't need the magnetics for the Ethernet; there are clever capacitive solutions.


Actually the issue is the maximum amount of RAM, not CPU power. 2x 12-core Opterons give you 24 2 GHz cores out of the box, with up to 128 or 256 GB of RAM.


Ayup. I've looked into renting out small ARM servers rather than VPSs, and the biggest thing stopping me is that the RAM is not socketed, and all the boards I can get are made for client boxes (e.g. not enough RAM, and too much video hardware driving up the cost). In the hosting industry we expect to pay off our hardware in something like four months, and unless you are a lot better at marketing than I am, it's difficult to charge much more than $20 per gigabyte of RAM per month, so the total cost for the unit (including CPU and disk) has to be around $80 for every gigabyte of RAM or so. Right now the PandaBoard looks like the best choice, and even before disk you are looking at almost $200 for something with a power supply, etc.

Also, nobody makes a reasonable power backplane (so I could power ten or however many of these little PandaBoards off one power supply).

If these things took DIMMs or the like, the RAM problem would be solved. (Really, I'd want ECC, which isn't usually available in SODIMMs, but I bet there are enough people who don't care that you could sell such a service even without ECC.)

But for now, virtualizing larger servers is a better idea. If you are that concerned about others stepping on you, it's possible to dedicate a disk to a particular virtual server alone; that would solve the biggest resource contention problems that come with virtualization.


That has nothing to do with the core, so there's no reason why an ARM server couldn't support the same capacity.


That's exactly what people used to say about x86, with some combination of IBM, DEC, Data General, Control Data, HP (PA-RISC), SGI (MIPS), Sun (SPARC), etc. substituted for Intel.


"...Project Denver where they are building high-performance ARM-based CPUs, designed to power systems ranging from “personal computers and servers to workstations and supercomputers”."...

It looks like they're targeting everything from mobile devices to the datacenter. I wouldn't be surprised to see ARM servers running 'big server tasks'. ARM is a huge threat to Intel, and the potential power savings in big datacenters should add up to quite a bit.


I get your point re: SQL Server & Oracle (though not for all DBMS types) - but why do you need horsepower to serve web pages?


I could easily imagine a big array of ARM CPUs in a server system; no single one on par with Intel's fastest chips, but they'd come in huge numbers and presumably be much cheaper.

And since they will integrate their GPU on the same die, it could provide massive parallel computing power as well via CUDA.



