The solution to this is to have one "real" PC acting as a NAS and serving iSCSI. You would have to divide the extra cost of this PC across all the ARM servers you have.
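A minimal sketch of the initiator side using open-iscsi (the NAS address 192.168.1.10 is made up):

    # on each ARM node: discover the NAS's iSCSI targets and log in
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node --login
    # the exported LUN then appears as a local block device, e.g. /dev/sda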
In order to extend their lifespan, you might want to minimize the writes by disabling some logs and the "last access" updates on files (`noatime` for "no access times").
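For example, in /etc/fstab (just a sketch, assuming an SD-card root on /dev/mmcblk0p2; adjust for your layout):

    # mount the root filesystem without access-time updates
    /dev/mmcblk0p2  /  ext4  defaults,noatime  0  1

You can also try it without rebooting with `mount -o remount,noatime /`.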
What site do you get your OS images from?
In other words, "enterprise" is more or less exactly the same stuff, but at a premium. Price discrimination strikes again.
I do nightly backups off the box, and if the machine does die, I'll post down a newly configured micro-SD card for insertion before doing much investigation.
Has anyone actually compared x86 and ARM doing some comparable task, and measured how much energy each consumes completing it? I know ARM draws a lot less power than x86, but if it draws a tenth the power yet takes 20 times as long to do the same task as an x86 server chip, it still uses more energy overall. I'm not saying it isn't more power efficient than x86 (it certainly has that potential, given all the old cruft in x86), but I just haven't seen any evidence that supports the claim.
Or am I missing the point? Is power efficiency not the main reason for ARM servers?
The other possible target is embarrassingly parallel workloads, where you really just want to cram as many cores into a given space as possible. Usually the limit on how many cores you can put in a rack is power or cooling, not space.
Where you probably won't see ARM in the near term is on workloads that are highly single-threaded and performance-critical. I think that's exactly the type of situation you're describing.
HTML version at http://bit.ly/KOhENi
And note that there are Intel solutions in the "cheap, low power" market too. There are ~$80 Cedar Trail boards (I forget the part numbers off-hand) which take 2GB of DDR3 memory and run on 15W or so at full CPU utilization.
That's all nice stuff, but just adds to the power and price for what should be a dirt cheap, almost disposable, piece of equipment.
Though I would definitely appreciate gigabit ethernet on the beaglebone...
If you wanted a dirt-cheap, more modular component you'd be looking at Arduino or Raspberry Pi, not a plug.
They're running Linaro, the Ubuntu-based ARM development fork from ARM/Linaro.org.
Is there any browser, or browser extension, that removes that... stuff? Maybe an AdBlock filter, a Greasemonkey script, or something similar?
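For what it's worth, assuming the bar is loaded from addthis.com, an AdBlock Plus filter along these lines should hide it:

    ||addthis.com^$third-party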
I've disabled the AddThis bar while I look into it (the caches might keep it alive for a while though), hopefully there's a suitable solution.
They are getting sent, just slowly.
What are you seeing when you visit the site?
Here is a screenshot, with my IP address at the bottom:
It looks scam-y too with the "google pays me $x a day" and "automated profit package" ads.
P.S.: I remember seeing it on a blog post on jgc.org at least once, as well as on joyoftech a while ago, but not today.
You are seeing that page because CloudFlare believes your IP address is behaving badly, for whatever reason (see https://www.google.com/webhp#q=22.214.171.124 ).
Instead of outright blocking all traffic from known bad IP addresses, they have a mechanism to let actual users through. That mechanism relies on a captcha flow, and on setting a cookie in the user's browser to bypass the IP block.
Disclaimer: I am inferring all of this from your screenshot. CloudFlare's actual process and intent may vary.
It'd be a bit like sending out a hard drive component for an engineer to replace when the whole drive costs $100; it's just not worth the effort for 99% of installations.
Let's say that a hard drive (or whatever storage device) has a failure rate of 1% over 1 year. With traditional hardware, for every 100 computers you have, you'll replace 1 hard drive. A new hard drive will cost a fraction of the machine's initial price, let's say 1/5. So maintenance costs for storage devices are 1/500 the initial costs per year, maybe a bit more if you factor in the cost of the labor.
Now if you make the same cluster of servers out of ARM hardware, let's say you'll need 4x the number of machines to get the same processing power. That's 400 machines. If the storage devices on these machines have the same failure rate, you'll need to replace 4 machines per year. However, since you're buying whole new machines, you don't get to pay for just the storage device. That's 4/400, or 1/100, of the initial cost per year to maintain the storage devices in your ARM cluster.
A huge assumption here is that both types of hardware have similar initial costs. So the point here is that it'll always be cheaper to compartmentalize your losses, unless ARM devices are much cheaper than traditional commodity hardware. Intuitively, throwing away a whole machine when something breaks is going to be a lot more expensive than just replacing the broken part. I don't imagine that it's extremely difficult to make parts replaceable on ARM boards, and it would definitely save some money, so I don't see why it shouldn't be done.
(Also, I realize that the Beaglebone uses SD cards, which are cheap and replaceable. But imagine that instead of storage devices, I'd used memory for the example)
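As a quick sanity check of the arithmetic above (costs in x86-machine-prices, with the assumed 1% annual failure rate):

    echo 'scale=4; (1 * 0.2)  / 100' | bc   # x86: 1 drive/yr at 1/5 machine price -> .0020 (1/500)
    echo 'scale=4; (4 * 0.25) / 100' | bc   # ARM: 4 whole boards/yr at 1/4 price  -> .0100 (1/100)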
The Intel cores on the Aubrey Isle chip are fairly large and take up most of the silicon on the die. An ARM-based design would be much smaller and cheaper to manufacture.
Now that I've said it, I wonder how much more expensive a RAM chip incorporating an ARM core would be versus pure memory. It would be interesting to have "smart memory" that could do things like "sha-384 this range and ping me back when you're done". Assuming other threads are not using that same component for other activities, it could be done basically "for free".
Though really you can get a 256MB VPS for £5/month at http://prgmr.com/xen/ with no setup charge and no unreliable hardware.
Then you have to remember that ARM is presently 32 bit, so you're going to have trouble going over 4 GB at all, which is currently a very small amount for a serious server. 64 bit ARM isn't going to be widely available for at least another 2 years, and at what cost we just don't know (but it's not likely to be £60/server).
md5sum of the 3.4 kernel tar.gz (average of three runs):

512MB Slicehost, 2.2GHz Quad-Core AMD Opteron 2374 HE = 0.275 seconds
512MB BeagleBoard-xM, 1GHz Cortex-A8 = 2.023 seconds
Just reading the file takes 0.7 seconds on the beagleboard; the slicehost is more variable but comes down to 0.5 seconds after a few goes (from 1 second). I'd expect it to be read from the page cache anyway.
However, I then remembered that the beagleboard is using busybox, and the two systems are really quite different from each other (different distributions, for example).
Update: busybox md5sum is about 20% slower than regular md5sum.
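For reference, the timing was done roughly like this (a sketch; the exact tarball name may differ):

    # warm the page cache, then time three runs
    cat linux-3.4.tar.gz > /dev/null
    for i in 1 2 3; do time md5sum linux-3.4.tar.gz; done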
I've run a fan-less EPIA board at home for a couple of years, and it's nice. Took some time to find the right board, though.
I would have commented on your blog but on Firefox the "about me" floats over the disqus column so I can't.
It's obviously not good enough. Will see about sorting it, thanks.
Agree with that. I use a pandaboard as a build server, and I found it a lot more efficient to use NFS than running off an SD card.
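Something like this, for anyone curious (paths and addresses are made up):

    # /etc/exports on the NFS server
    /srv/build  192.168.1.0/24(rw,async,no_subtree_check)

    # on the pandaboard
    mount -t nfs 192.168.1.10:/srv/build /mnt/build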
Ewan, have you tried USB HDD instead of SD card?