Or, you know, use a cross-compiler?
There might be even stronger reasons not to use cross-compilers, such as weird bugs or compiler version compatibility issues.
Many Pi users' only Linux system is the Pi.
[EDIT] Vendors do this too. I went to a workshop on the Freescale i.MX line and guess how everyone got the development system? A VMware image on a thumb drive. You don't have to be a hero when it comes to cross-compiling, just get the work done.
I think it would be useful to have a VirtualBox image available with everything set up for cross-compiling for the Pi, BBB, etc. The thought of setting up non-standard toolchains seems to put off a lot of users.
I've also been working on a project involving the BeagleBone Black and I saw a sharp increase in users when I began distributing an OS image with everything pre-installed.
Of course not everyone who wants to play with early versions of the new Q3A port falls in that category. But the fact that the Raspbian developers themselves have steered clear of x86->RasPi cross-compilation suggests that it's not necessarily straightforward even for experienced people.
That's from RPi's mission statement. It's on their website.
RPi had nothing to do with being a cheap Linux platform for hardware hackers and people who wanted a cheap XBMC or MAME box. But now it's 99% that.
Indeed; I didn't suggest otherwise. Mind you, getting families on the Internet for $50 or $300 wasn't really in the original mission either:
"The idea behind a tiny and cheap computer for kids came in 2006, when Eben Upton, Rob Mullins, Jack Lang and Alan Mycroft, based at the University of Cambridge’s Computer Laboratory, became concerned about the year-on-year decline in the numbers and skills levels of the A Level students applying to read Computer Science. From a situation in the 1990s where most of the kids applying were coming to interview as experienced hobbyist programmers, the landscape in the 2000s was very different; a typical applicant might only have done a little web design.
Something had changed the way kids were interacting with computers. A number of problems were identified: the colonisation of the ICT curriculum with lessons on using Word and Excel, or writing webpages; the end of the dot-com boom; and the rise of the home PC and games console to replace the Amigas, BBC Micros, Spectrum ZX and Commodore 64 machines that people of an earlier generation learned to program on.
There isn’t much any small group of people can do to address problems like an inadequate school curriculum or the end of a financial bubble. But we felt that we could try to do something about the situation where computers had become so expensive and arcane that programming experimentation on them had to be forbidden by parents; and to find a platform that, like those old home computers, could boot into a programming environment. From 2006 to 2008, we designed several versions of what has now become the Raspberry Pi; you can see one of the earliest prototypes here."
It was meant to be a direct, hands-on programming environment with nothing valuable to break for programming beginners, thus David Braben and all the BBC Micro nostalgia.
This satisfies people who want to be able to read, understand and hack on the video driver.
It would be interesting to race someone unfamiliar with that toolchain installing it and cross-compiling versus someone just following the instructions.
And since it's a one-shot thing, why make people compile the kernel at all?
(Reminds me of a POV-Ray benchmark I did some time ago: start the render, go on a holiday trip, wait some more, done.)
Once the code has been tweaked there'll probably be an image for it somewhere.
I haven't looked, but is there documentation that would let 14-year-olds who have never done this before follow along?
Building GCC shouldn't take more than a couple hours on a modern machine, and it's a one-time cost for a drastic speedup of every subsequent RPi build. So the aggregate time should still be considerably less than 12 hours.
Couldn't somebody build a full cross-compiler toolchain as "relocatable" binaries, depending on an older kernel, and then just offer that as a binary download to run on most recent distributions? It's not a typical way to distribute a Linux application, but it should work in principle.
Your GCC build has to match the userspace C library (uClibc or other). If it doesn't, you'll need to do all the path passing yourself and usually link manually as well.
It also has to match the kernel somewhat (not so critical for userspace apps, but if you want to build a kernel module or something it's critical).
So, basically, what you "should do" is build the whole system together (cross-compiler + userspace + kernel). Some tools do that, building first a "raw" cross-compiler, then the C library and kernel, then a full compiler with all the options applied (see the sketch below).
It tends to only be worth it if you are going to be using it more than a few times.
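For the curious, crosstool-ng automates exactly that dance. A minimal sketch from memory (sample names and menu layout vary between ct-ng versions, so treat this as a starting point, not gospel):

    # build a relocatable ARM cross toolchain with crosstool-ng
    ct-ng list-samples               # list the starting configurations
    ct-ng arm-unknown-linux-gnueabi  # load a sample close to your target
    ct-ng menuconfig                 # match libc / kernel header versions here
    ct-ng build                      # binutils -> core gcc -> libc -> final gcc

The result lands in ~/x-tools/<target-tuple>/ by default, and that directory can be tarred up and shipped around, which is more or less the "relocatable binary download" idea suggested above.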
Yes, I agree 100%, building a cross-compiler is a very complicated task.
But I think with the weight of the Raspberry Pi project behind it, this should have been easier. Buildroot (http://buildroot.uclibc.org) is a good project, but unfortunately it's very unstable.
Building a cross-compiling toolchain is hard. Totally understand that. Android and Linaro both provide pre-made, pre-tested builds you can just pull and use. I recommend using them; see the sketch after the next point.
Building the kernel itself. As mdwrigh2 pointed out, this is actually pretty easy. Also the kernelnewbies community is here to help!
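A rough sketch of both steps, for the skeptical (the Linaro tarball name is a placeholder for whatever the current release is, and the defconfig name is from memory, so double-check both):

    # 1) unpack a prebuilt cross toolchain and put it on PATH
    tar xf gcc-linaro-arm-linux-gnueabihf.tar.xz -C ~/toolchains
    export PATH=~/toolchains/gcc-linaro-arm-linux-gnueabihf/bin:$PATH
    arm-linux-gnueabihf-gcc --version   # sanity check
    # 2) cross-compile the RPi kernel from inside the raspberrypi/linux tree
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcmrpi_defconfig
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j$(nproc) zImage modules dtbs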
I built one on an old Xeon in something like 4 minutes, and it took 3x as long to upload/download the files to the system I wanted to use it on as it did to build it.
To give you an idea of how bad the difference is: compiling the Linux kernel defconfig, using 2 distcc boxes, each with an AMD FX-8320 at 3.5 GHz (8 cores) + 32 GB RAM + 1 Gb LAN. The x86 machines boot off RAID 1+0, the Pi obviously off its SD card. Here's what happens:
* Compile for armv6 on pi host + distcc compile farm: 19 min 11 sec
* Compile for armv6 on qemu host + distcc compile farm: 29 min 36 sec
* Compile for armv6 on x86_64 hardware + cross toolchain + distcc compile farm: 1 min 59 sec
* Compile for x86_64 defconfig on off-the-shelf x86_64 hardware (also fx8320) + distcc compile farm: 1 min 31 sec
But to anyone intending to compile ARMv6 crap on a Pi- or qemu-based compile farm: unless it's a one-off thing or your time is worthless, don't do it. Invest the time getting crossdev, crosstool-ng, etc. + distcc working properly (rough sketch below). I only bothered because I was looking for a better way to build stuff for Android and used the setup as a testbed.
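For reference, the distcc half of that setup is roughly the following (host addresses are made up; each helper needs an ARM cross-gcc matching the Pi's native gcc version):

    # on each x86 helper: run the distcc daemon, allow the LAN
    distccd --daemon --allow 192.168.1.0/24
    # on the Pi: fan compile jobs out to the helpers; distcc ships
    # preprocessed source, so the helpers' "gcc" must resolve to an
    # ARM cross compiler (e.g. a symlink early in distccd's PATH)
    export DISTCC_HOSTS="192.168.1.10/8 192.168.1.11/8 localhost/2"
    make -j18 CC="distcc gcc"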
It is nice that they released an older chip's driver code; it is better than nothing (and better than just programming manuals). But we have companies like Qualcomm with the Atheros drivers, Intel with their wifi, GPU, and ethernet drivers, and AMD with their chipsets, GPUs, etc., all contributing engineers to the kernel to make FOSS drivers, and we shouldn't give any company too much credit for doing any less than the same.
Which boards are those? I didn't find a single board that had an open video driver - not the Allwinner-based ones, not the BeagleBone Black (no encoding/decoding hardware at all).
The APIs in the RPi that you mention are high-level APIs. Before Broadcom open-sourced the implementation (which they called IDL, I think), it was not possible for RPi users to interface a custom camera module with the RPi (without signing NDAs with Broadcom).
After some investigation, I went with the LeopardBoard. I haven't progressed far on the project, but AFAIK it and other boards had completely open stacks. I am stuck on a simple issue: not being able to get the right serial cable to connect to the board.
> it was not possible for RPI users to interface a custom camera module with RPi
That's correct. But the RPi's camera modules (standard and NoIR) are very well supported (even if the source is not all open), and they work very well - reasonable quality at 5MP@15fps, 1080p@30fps, and 720p@60fps, with access to the encoding/decoding pipeline that lets you e.g. insert an overlay before h264-encoding the original stream (in fact, that's the most cost-effective way to add overlays to an already-encoded 1080p h264 video even if you don't connect a camera).
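The simplest form of that (text burned into the frames before the hardware encoder sees them) is a one-liner with raspivid's annotate option. A sketch from memory, so check raspivid --help on your firmware for the exact flag behaviour:

    # capture 1080p30 from the Pi camera with a text annotation overlaid
    # before the h264 encode; -a takes annotate flags or custom text
    raspivid -w 1920 -h 1080 -fps 30 -t 10000 -a "cam01" -o out.h264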
I find that it's pretty hard to beat RPi on price, support, functionality/price, community, openness etc. in general - to the point that if the RPi is not the right solution for a project, it is unlikely that any of its competitors (low power sub $100 cpu+gpu+network on a small form factor) is.
I DON'T, because A) it was not a driver (they should have made the driver and released it themselves), and B) it was for a different chipset, not the actual one, with people expected to hack on porting it for money.
Definitely not a Pi.
This was the first game that had no software renderer.
All other games (until ~2001), including Unreal Tournament, ran just fine on a 133 MHz Pentium with a software renderer (other exceptions were Outcast and Ultima 9).
Descent 3 came out a few months earlier, and also had no software renderer.
Hell, I had to run Quake 2 at 512x384. SiN was only playable at 384x288, and looked awful.
The RPi's GPU is much better than a Voodoo2, so I can believe 133 fps.
Source: I cloned Orange Smoothie Pro for JK3.
Someone ported Quake 3 to ARMv6 Symbian smartphones in 2008. I remember running this on a Nokia E90 Communicator, and the framerate was over 15fps most of the time.
So yeah, 20 FPS is definitely what I would call playable. ;)
I remember Falcon 3.0, and perhaps some other simulation games, having FPU support for those few who had one, however...
Nice work though, Simon.
Although I performed many tests, I was never able to get to the bottom of it.