Wordpress site running on Raspberry Pi (vo3.net)
99 points by jstalin | 36 comments



Headers: W3 Total Cache/0.9.2.4

So he's serving a static page 90% of the time? I'd love to see how this thing operates without a cache.


Running

    ab -n 1000 -c 20 http://pipress.vo3.net/wp-login.php\?action\=lostpassword
Seems to have killed it. The site came back when I killed the process.

Sorry jstalin but I had to do it! For science!


For science, lots of people will probably try to replicate your results, as good scientists should.


Haha, I liked this conversation! @jstalin My Raspberry Pi will be delivered in a few days; I'll let you know if I can figure out the Varnish issue. Meanwhile, you can also submit a ticket on Trac: https://www.varnish-cache.org/trac


The quickest way to get an answer about Varnish issues is to ask on the IRC channel.

irc://irc.linpro.no/#varnish


This seems like a persistent issue, so it should be reported as a bug rather than discussed over IM.


No worries, I support science.


Isn't that how WP scaling works, though? It creates static pages.


Not by default, and especially not once you start adding any sort of add-ons. A typical WordPress site will hit your MySQL database a TON.


I'm not familiar with WordPress, but does it require that you use MySQL?


Yep.


This isn't really accurate. WordPress supports overriding the database class. Here's an example of a plugin that supports Postgres: http://wordpress.org/extend/plugins/postgresql-for-wordpress...


Pretty much every good WordPress install involves setting up something like WP Super Cache, which generates static pages for your webserver to hit instead of the PHP pages.

Most blogs lack any dynamic content (except the admin section) and can just regenerate the pages when a new post is made or a new comment is added.
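
A sketch of the webserver side (nginx here; WP Super Cache's default cache directory assumed, and a real config would also skip query strings, POSTs, and logged-in cookies):

    # try the pre-generated static page first, fall back to PHP on a miss
    location / {
        try_files /wp-content/cache/supercache/$http_host$request_uri/index.html
                  $uri $uri/ /index.php?$args;
    }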


The login link, which is probably not served from cache, is VERY slow for me, so that's probably your answer.


Also, no images. I had to double-check because I saw the drop-shadow border, but it looks like it's all CSS.


I'm working on a project with a Rails site running on a BeagleBone; it populates its database with data coming from a USB device (~200 bytes/s, continuous). I'm finding that the SD card holding all of the data and the OS fails quite quickly (on the scale of weeks). Are SD cards just not up to the task?


Yes, SD cards are not designed to be disks. 'SSD' drives work by selling you 4-10x the amount of flash 'advertised' and replacing failed pages from the excess over time. In that way they are more like light bulbs than switches (finite lifetime).


I figured as much; thanks for the input. It would be wise, then, for the OP to make sure that noatime (which turns off access-time updates) is set in fstab for the filesystem; you don't want to write to that SD card on every read.
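
For example, something like this line in /etc/fstab (device name assumed; /dev/mmcblk0p2 is the usual Raspbian root partition):

    /dev/mmcblk0p2  /  ext4  defaults,noatime  0  1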


Thanks, that appears to be the default on the wheezy image they distribute.


Chuck, can you point to more articles or first-hand evidence of lifetime testing of SD cards? I am very curious.


Well, Intel used to have a service-life comparison for flash up on their site, but I can't find it now :-(. There was also a great article on it in EE Times, which has apparently also faded. SanDisk, in its 2004 document [1], reiterates the 100,000-write limit.

Typically flash is broken up into 'pages', which can be as small as 8K bytes or as large as 128K bytes; there are reasons for doing it different ways (mostly related to write latency). If you consider an 8GB flash with 128K-byte pages, that is 64K 'pages' (8G/128K). And if you want to write one byte on a 'page', you have to rewrite the entire page.

SanDisk and others will typically spec that a single 'page' can be written 100,000 times, and wear-levelling software ensures that no single page gets written much more than any other. So you can expect a total of 100,000 * 64K, or about 6.55B, writes before you see a failure.

If you look at typical 'disk' statistics you will see that a typical SATA drive does something like 80-100 I/O operations per second (IOPS), so writing at 80 IOPS it would take nearly 23 thousand hours to wear out an 8G flash. But at flash speeds (10,000 IOPS) that could easily drop to a mere 182 hours (which is why you don't see a lot of 8G SSDs).

But this is where the math gets fun. The card is spreading writes over the total available pages, so assuming 128K-byte pages, 100K writes per page, and 500 IOPS (somewhere between a SATA SSD and a spinning-rust drive, and well within the rate possible over USB or SPI), that kills off one page's worth of endurance (128K bytes) every 200 seconds.
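
To make the arithmetic concrete, here is a small sketch that reproduces those numbers; the parameters are the assumptions from this thread, not measurements:

    #!/usr/bin/perl
    # Back-of-the-envelope wear-out estimate for an 8G card.
    use strict;
    use warnings;

    my $capacity  = 8 * 1024**3;      # 8G of flash
    my $page_size = 128 * 1024;       # 128K-byte pages
    my $endurance = 100_000;          # spec'd writes per page

    my $pages        = $capacity / $page_size;   # 64K pages
    my $total_writes = $pages * $endurance;      # ~6.55B writes

    # Hours to exhaust the card at various sustained write rates.
    my %rates = (80 => 'spinning disk', 500 => 'USB/SPI-ish', 10_000 => 'flash speed');
    for my $iops (sort { $a <=> $b } keys %rates) {
        printf "%6d IOPS (%s): ~%.0f hours\n",
            $iops, $rates{$iops}, $total_writes / $iops / 3600;
    }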

Writes get flushed out to disk right away to keep the file system consistent, so you get more writes than you might expect. And writing one byte to a file not only changes the block holding that byte, it also changes the inode, which holds the length and modification time (assuming you've disabled atime). So every write from userland can be two writes to the device, and sometimes more if it overflows a directory block.

Other variables include how effective the wear leveling is; the card has to 'remember' its leveling. An early version I saw wrote a generation number into a page header (stored with the page itself) and then managed the map from the linear space presented to the user to the scattered space of the actual flash. On cheap cards I've seen code which basically mapped each logical page to 16 candidate physical pages and then linearly searched among those 16 for the next page to write. This had the effect that the single-page write endurance was shorter than the aggregate device write endurance. (Back when I was doing my embedded OS work I used JTAG to read flash state directly from flash drives; not the SD cards, but the early drives offered for sale as replacement disks.)

Anyway, like most things, it's not immediately obvious what factors come into play, and it's easy to do things that screw you (like atime writes).

[1] www.flashgenie.net/img/productmanualsdcardv2.2final.pdf


Kingston lists the write cycles as 3,000-5,000, which would shorten your calculations by at least 20x.

http://media.kingston.com/pdfs/FlashMemGuide.pdf

Also, see this thread:

http://www.raspberrypi.org/phpBB3/viewtopic.php?t=21281&...

Edit: The Kingston data is a bit dated, but in any case 10,000 is probably a better number for cheaper cards.


So I did a non-scientific test on my Raspberry Pi at home with a 4G card that has typically been my 'move an .iso around' card; it is a 'class 10' card from Microcenter Warehouse. Using this Perl program:

    #!/usr/bin/perl
    # Hammer the card: repeatedly create small files, sync, then delete them.
    use strict;
    use warnings;

    my $letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ-";
    my $range   = length $letters;

    sub rand_str {
        my ($len) = @_;
        my $s = "";
        $s .= substr($letters, rand($range), 1) for 1 .. $len;
        return $s;
    }

    while (1) {
        # Write a batch of 16 files with random names and contents.
        foreach (1 .. 16) {
            my $nm = "A" . rand_str(15);
            open(my $fh, ">", "$nm.delete-me") or die "open: $!";
            print $fh rand_str(128), "\n";
            close $fh;
        }
        `sync`;            # force the writes out to the card
        `rm *delete-me`;   # then delete everything and loop
    }
Killed it dead in 3 hrs 18 minutes. Your mileage may vary.

May not be 'first hand' enough though.


Nice work, thanks. I can see now how the cloud will be important with these small devices. I wonder how OUYA (and similar devices) is going to manage data and what the lifetime of the storage will be.


FWIW, in normal operation my Raspberry Pi mounts everything but the root volume from NFS. Very few writes are left to go to the system.


Wow. Front page of Hacker News and it's still up. (granted, it's only been 19 minutes...)


Most people who have their sites go down aren't using proper caching, so their WordPress installs are evaluating PHP and querying the MySQL database on every page load. That is extremely wasteful of processor cycles and memory, hence the site going down when the server gets overloaded. The proper way is to have caching in place so the page is evaluated once and the result saved as static HTML, which can then be served very quickly and efficiently.

It looks like this is what he is doing. I'd be willing to bet the site will stay up.


If it survives, does that mean the Pi can essentially saturate its network connection when it doesn't need to do any "real" work? I bet you could stick it in a rack in front of the grown-up servers to act as a haproxy dongle; how hard can it be?

I officially dub it the cutest webserver... in the world.


> Pi can essentially saturate its network connection

There are a few caveats here. If you're serving the same static content (< 512MB, so it fits in RAM), then yeah, why not? Serving static content isn't CPU-intensive, and you're effectively serving straight from RAM.


Let's not forget the caveat of network speed (4 MB/s tops), and the other caveat of alleged stability problems when saturating the network link, a.k.a. the USB system (the Pi's Ethernet hangs off USB).


Yeah, I'm curious to see how long it lasts.


Looks like forever. If it's not taken down by the hit rate from a top slot, it won't be taken down. It's gotten slower, but it's alive.

Nice work.


It's down as of 11/30 11:30PM CST.


Investigating now... looks like it went down at 12:26am Eastern time for no apparent reason...


Nice. With Redis and a simple script I wrote, I load WP in a few milliseconds: http://www.jimwestergren.com/wordpress-with-redis-as-a-front...

He should try it, as he is getting errors with Varnish.
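
A minimal sketch of that idea in Perl (not his script, which is PHP; just the check-the-cache-first pattern, using the CPAN Redis module, with a hypothetical render_page() standing in for WordPress):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Redis;    # CPAN Redis client, assumed installed

    # Hypothetical stand-in for whatever actually builds the page.
    sub render_page { my ($uri) = @_; return "<html>rendered $uri</html>" }

    my $r   = Redis->new;                 # 127.0.0.1:6379 by default
    my $uri = $ENV{REQUEST_URI} // "/";
    my $key = "pagecache:$uri";

    my $html = $r->get($key);
    unless (defined $html) {
        $html = render_page($uri);        # slow path: build the page once
        $r->setex($key, 300, $html);      # cache it for five minutes
    }
    print "Content-Type: text/html\r\n\r\n$html";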


I tested your script and compared it on my LEB with my usual setup:

- OpenVZ 128MB RAM

- Ubuntu

- Nginx

- MySQL

- PHP-FPM

- APC

- Varnish (replaced by Redis for the comparison run)

At 250 users/second, here's the result: http://imgur.com/a/hPkyZ

This is a vanilla WordPress installation. There is almost no difference in load time or overall resource usage as far as I can tell. I've never used Redis before, so thank you for this.




