A birthday present from Broadcom (raspberrypi.org)
625 points by 1ris on Feb 28, 2014 | 161 comments


If you want to go straight to the horse's mouth and just start hacking (this is Hacker News, after all!), the Broadcom downloads are here:

* BCM21553 VideoCore IV graphics driver source: http://www.broadcom.com/docs/support/videocore/Brcm_Android_...

* BCM21553 VideoCore IV documentation: http://www.broadcom.com/docs/support/videocore/VideoCoreIV-A...


For those interested, as far as I understand it, this release doesn't include VPU or QPU assembler tools.

We released a QPU assembler at https://github.com/hermanhermitage/videocoreiv-qpu/blob/mast... a couple of weeks ago. And a reference javascript VPU assembler is due very soon (mirroring the path taken with the QPU assembler).

Because our own work was independent and pre-dates this information by a couple of years (in the case of the VPU), we have some alignment work to do in terms of mnemonics.

This release adds a lot of knowledge to the table - extending our understanding beyond the core instruction sets (which were already well reverse engineered) into a deeper understanding of the various hardware registers.


Figured you'd get a kick out of this release. Should be enough for people to figure out how Andrew's FFT code works.


Terrific release, thanks for all that hard work! (Who would have thought! Well, that crazy dataflow stuff actually works... :)


Okay - that's freaky seeing Herman and Eben converse on the topic... knowing how hard Herman's been working on reverse engineering the VPU/QPU.

Not that I'm saying Eben is trying to hide anything - he just works for Broadcom, a company very concerned with their IP.

Either way, I find the release of this info both surprising and awesome.


Good ol' gzip download, because fuck source control.

edit: FWIW, a friend who works there told me flat-out to just avoid them (I'm job-hunting)


This is almost a universal problem with firmware and chip engineers, who are generally older and more experienced, and thus apparently can't be taught new tricks. There are also issues with bureaucracies in each and every semiconductor company (which are on average decades old), so firmware is usually hot-potatoed around by management and ends up lighting the customers' projects on fire almost every damn time. I don't have enough fingers to count how many times I've encountered problems in a project and wasted weeks treading water, eventually to find out that the code examples weren't tested, that a parameter that closely affects very sensitive silicon isn't documented, or even that I can't so much as get the documentation for a licensed USB peripheral that is non-EHCI-compliant (who knew such a thing even existed anymore).

Everyone from Broadcom to Qualcomm (and many more) is guilty of this, and it is a very rare treat to find a chip that is well documented, with firmware and code examples that work and give you a full picture of what you can do and where the pitfalls are.


Wow, armchair engineer, you sound like you really know what you're talking about.

When this old engineer worked on embedded MIPS cores and 802.11[abgn] at Broadcom, everything was under source code control. This was pre-git, so I think we were using cvs. How the hell do you conclude that releasing a zip file means the engineers don't use source code control?


Those kids on HN think that everything cool was invented last summer by some new college grad. The funny thing is that most of this stuff was already invented in the '70s, but who would care about the prehistory of computing?


Yea. I totally remember Web Browsers, Big Data, and Terabyte hard drives back in the '70s. It's too bad that these things were lost to the ages, only to be re-invented in the present.


New names for old ideas.


Based on that, SpaceX isn't a big deal, because of the Apollo program, right?


Especially because of the Apollo program. SpaceX, for all its efficiency, just goes to show how far the present and past few generations have dropped the ball.


Well, as a many-time Broadcom customer I've certainly seen the best and the worst of it. I've had large (1000-file) software releases for Broadcom chips that mutated their directory structure release to release; bug fixes would arrive, revert themselves, then after complaints return. Integrating a new software drop into our VCS was always fraught and took weeks longer than we expected - people would fight not to be stuck with doing the next Broadcom integration, and eventually we'd say "enough! we'll live with it" and refuse to take any new stuff. At one point we got the FE to admit that there was no one master tree inside Broadcom, just personal trees that were passed around. On the other hand, other code from them was well maintained and obviously under good source code control.

I think the way to understand Broadcom is to realise that they are really a whole bunch of smaller startup companies; more are continually being bought and brought into the fold, and this messes with their internal company culture. Chips may have functional units in them that came from 5 different companies, each with their own coding standards, VCSs, documentation standards, etc. As a result it's pretty hit and miss, and it may take a generation or two for a new technology to settle down. I kind of get the impression that there's ongoing friction between the remnants of these companies as they find their way in the new organisation.


The fashion of the day is, for better or worse, "if you don't have a github you ain't shit."


git is overkill for facilities with small groups working in a common area (not hard to imagine if you're in the hardware biz) and all the engineers are used to cvs command syntax...


I'm struggling to think of a situation where Git could be overkill for anything... I've personally used it for hardware and software projects where it was just me, where it was used by small teams (3-5 people), and with hundreds of developers working on a large code base. It worked well in all of them.

The only case where I would not use version control is files that never change. Everything else should be in some form of VC (sometimes Git is not the best choice, but it's good for lots of things).


Semiconductors, especially VLSI, have a different workflow from most software projects. There are supposed to be as few releases as humanly possible due to the enormous cost of each release. There is not expected to be a community of developers/designers, each with their own vector of revisions available for development and testing. It is prized to arrive at a single image which is reproduced as many times as possible in practice, rather than in variation, as the lithographic process favors identical reproductions.

In my estimate, Broadcom is on the cusp of reaching a sublime point I've seen in other, unnamed semiconductor companies, at which the number of combinations and revisions of IP within a family of chips becomes difficult to account for.

So, it's really not even a question of git being overkill - it's a question of whether its source code will persist over many times the length of the project. Git will be considered for substitution by those deeply committed to CVS simply after a matter of time, once its institutional bugs have been discovered and patched.


Overkill until the network goes down or the server dies or somebody needs to work remotely and the VPN isn't working. I don't know how fast git is wrt cvs, but I remember being pleasantly surprised coming from svn.

You don't have to have a distributed workflow to occasionally reap the benefits of working with git.


The best VCS is the one you use.


There _are_ differences, but yeah - you're arguing about the ranking of the 99.91% - 99.95% better solutions compared to no version control. (Seriously - even CVS and SourceSafe are 99.9% closer to "perfect" than doing nothing)


unless it's CVS, or god forbid, Visual SourceSafe.


On my one-man team, the very first command I type on a project after mkdir is git init. Everything is logged and timestamped, and it is far from overkill. Git is easy to learn after working with CVS.


A one man team is very different from a 20-man team (which I still think counts as small).

Hey, I like git too. Respecting the needs of others not to spend mental load on learning something they don't want to is often important too.

Is it really worth it to spend time re-training the 15 of 20 engineers that don't already know git? I don't think so.


Agreed. I write lots of small apps by myself, and I use version control to save me from, well, myself.


Let me guess... BCM43xx? If so, what's your email? This armchair engineer has an errata shit list a mile long for you. You know, years after the first revision.


I hear you. It's so much worse than you can imagine. By the time the code gets out the door, whether through a dump on the net or, even worse, through a channel partner and onto a shelf at Fry's, all the engineers are working on new projects. There is sometimes a six-month lag from the time the code is compiled until it shows up in a store. By then the technology has changed and no one gives a shit about the old stuff. There is no incentive and certainly no corporate push to go fix the bugs that got shipped. I know I always felt like it would be a simple thing to go patch the code, but it was never a priority for the management.

I'll tell you the really screwy thing about BCM43xx development. We sold reference designs to companies like 3Com and Linksys (before and after acquisition by Cisco), Apple, D-Link, etc. All of them got the same reference design, in theory, but if one of those companies reported a bug in our code, that company would be the only one to get the patched code. As a consequence our code was littered with #ifdef (COMS) or #ifdef (LINKSYS) with specific patches enabled. The other companies, those that hadn't yet discovered the bug, would get updates that we knew were broken. It was terrible for the companies, even worse for the customers of those companies.

Plus there were source releases to those companies, and they weren't allowed to know about patches that they didn't receive. So we had a step in our build process called the code transmogrifier. That would go through the code and expand certain C preprocessor symbols so the code no longer contained #ifdef (COMS) or #ifdef (LINKSYS). That way every company got a different source code release as well. We would do separate compiles of the transmogrified code for each customer.
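
To make that transmogrifier step concrete, here's a hypothetical sketch in C. The symbol and function names are invented for illustration; this is not Broadcom's actual code:

    /* Master tree: per-customer patches live behind #ifdefs. */
    int apply_linksys_dma_workaround(void);  /* fix only Linksys was told about */
    int start_phy(void);

    int init_radio(void)
    {
    #ifdef LINKSYS
        /* patch enabled only in the Linksys build */
        if (apply_linksys_dma_workaround() != 0)
            return -1;
    #endif
        return start_phy();
    }

    /* After transmogrification for the Linksys source release, the
     * conditional is expanded away (tools like unifdef do the same thing
     * mechanically today), so Linksys's drop ships the workaround as
     * plain code, every other customer's drop omits it, and no #ifdef
     * remains to hint that the patch exists. */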

Then we had all the customization, because those companies had fired all their own engineers and instead wanted us to produce a product that was branded "Cisco" or "Linksys". We had to take the exact same code and rebrand copyright strings to make it look like it was written by our customers.

I voted with my feet and left the industry.


> generally older and more experienced, and thus apparently can't be taught new tricks.

Ah yes, experience, the bane of all technical projects.

I have worked at lots of hardware companies over the years, and to varying degrees, they universally treat software as the red-headed stepchild -- an unfortunate necessity akin to paying taxes, or worse.


Hahaha! Ha! Haa...

And that's why I no longer work for a semiconductor company.


akislev: I think you paint with too broad a brush. I've worked as a chip designer, embedded systems programmer, and kernel hacker, and it really depends on the team. Also, most chip teams I've worked with have been relatively young (most in their 20s). Most big silicon projects depend on good version control software, because you are doing continual integrations and practice tapeouts through the last half of a project; you may have 2-3 different versions of the code you're working on passing through different portions of the process at any one time.

As the guy on the chip team who understood software well (or the guy on the software team who understood the gates), I do think there's a somewhat different issue that has to do with schedules: as a chip designer you get to do maybe 1 month a year of creative design and 11 months meeting timing and making damn sure that silicon will really work the first time. By the time you tape out, you're really looking forward to that creative bit, and about the time you're up to your armpits in it, the silicon comes back and the firmware guys are doing bringup. You don't have a lot of time for the firmware guys because they're getting in the way of the cool part of your job, and besides, you are sooo done with that chip you slaved over last year.

I don't know what the answer is other than making sure you have firmware running on your logic simulator long before tapeout (you should be doing that anyway)


Oops, yeah, I chose the wrong words and painted way too broadly. I meant version control as far as the firmware libraries and public-facing code go. I can't imagine going from FPGA to silicon synthesis is even possible without extremely tight version control of some sort or another (I think they used to use basic transparencies for comparing masks).

I stand corrected as far as the average age of employees but every time I have interfaced with any level of a silicon company the average age distribution has been skewed pretty high.


They also have to live in fear of patent suits. Open-sourcing a modern chipset driver is a pretty ballsy thing to do.


IANAL and I don't have much experience with the legal side of silicon IP, but from some cursory conversations with people in the industry: if you're making a chip as advanced as a VideoCore, you're certainly already cross-licensing a lot of IP cores. If you license, for example, an EHCI-compliant USB core or an MMU or cache pipeline that you get in the form of a VHDL or Verilog file (poor you), that license automatically transfers a right to the patents (as long as the legal chain of contracts was worked on by competent negotiators and lawyers), and rarely do those rights extend to firmware source code, from my understanding.

There could still be problems with the documentation of the interfaces to different cores or whatever, but that's peanuts to a company like Broadcom. I'd guess they already have very liberal agreements for important IP and the stuff you'd sue over is mostly the closed source silicon.


22 year old working in firmware department of major silicon IP company here (you can probably guess who - obviously standard disclaimer about I don't represent them).

My company selling IP rather than hardware makes it a bit easier, but: management is realistic and reasonably versatile. Attitude to proper technologies (e.g. using Github instead of releasing tar bombs over FTP) is good. Documentation is good, bar "don't document that or we will be sued" situations, which are too common. Testing sucks in certain areas but is good in most.

Obviously at my age I don't have much experience, but I thought the alternative view might be nice.


Don't worry, they're going to have to release it again. This zip contains many files with copyright (c) Broadcom (and other companies) missing the 3-clause BSD license.

On an unrelated note, x86 looks like RISC compared to this chip assembly language :)

   vmul  -,H(1,0),3 CLRA ACC
   vadd  HX(3,0),H(0,0),2 ACC
   
   vasr  H(0++,0),HX(2++,0),2 REP 2
   ;vmov  H(0++,0),0 REP 2
   vst  H(0++,0),(r0+=r3) REP 2 
   add   r0,16
   addcmpblt r5,1,r1,loopacross


The vector register file is a block of registers 64 bytes across by 64 down which can be accessed horizontally (H) or vertically (V), and in units of 16 bytes (H8), 16 shorts (H16 or HX), or 16 32-bit words (H32).

Each processor has an accumulator that can be cleared (CLRA) or used (ACC).
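
A rough C model of that addressing scheme may help (a sketch only: the real hardware accesses the file through instruction encodings, not function calls, and the little-endian HX lane layout below is an assumption):

    #include <stdint.h>

    /* The vector register file: 64 bytes across by 64 rows down. */
    static uint8_t vrf[64][64];

    /* H(y,x): 16 bytes read horizontally from row y, starting at byte x. */
    static void read_H(int y, int x, uint8_t out[16])
    {
        for (int i = 0; i < 16; i++)
            out[i] = vrf[y][x + i];
    }

    /* HX(y,x): 16 shorts (32 bytes) read horizontally from row y;
       little-endian lanes are assumed here. */
    static void read_HX(int y, int x, int16_t out[16])
    {
        for (int i = 0; i < 16; i++)
            out[i] = (int16_t)(vrf[y][x + 2*i] | (vrf[y][x + 2*i + 1] << 8));
    }

    /* V(y,x): 16 bytes read vertically, down column x from row y. */
    static void read_V(int y, int x, uint8_t out[16])
    {
        for (int i = 0; i < 16; i++)
            out[i] = vrf[y + i][x];
    }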

This code is basically equivalent to the following C code:

    uint8 in[2][16];
    int16 temp[2][16];
    int32 r1, r3, r5;
    uint8 *r0;
    int x, y;
    do {
        /* ... code omitted ... */

        // vmul  -,H(1,0),3 CLRA ACC
        // vadd  HX(3,0),H(0,0),2 ACC
        for (x = 0; x < 16; x++) {
            temp[1][x] = in[1][x]*3 + in[0][x] + 2;
        }

        // vasr  H(0++,0),HX(2++,0),2 REP 2
        // vst   H(0++,0),(r0+=r3) REP 2
        for (y = 0; y < 2; y++) {
            for (x = 0; x < 16; x++) {
                in[y][x] = temp[y][x] >> 2;
                r0[y*r3 + x] = in[y][x];
            }
        }

        // add   r0,16
        // addcmpblt r5,1,r1,loopacross
        r0 += 16;
        r5 += 1;
    } while (r5 < r1);
This is code for the vector processor. This is not the same as the GPU cores which use a different architecture and instruction set (based around floating point calculations).


I made a back-of-the-envelope design of an architecture that used the same technique in college. Way cool to see it in practice.

This project of yours looks really cool: https://github.com/peterderivaz/pymeta3

What do you think about http://www.myhdl.org/doku.php ?


Wow, a selectable row-major or column-major register file! I would have never imagined such a thing. =)


Fancy meeting you here :)


Looks like SSE2 or SSE3 with a different syntax and vector support.


What/where is the assembler that is used to compile this code?


Why would a large company release the commit history of source code when it could open them up to all kinds of lawsuits? It seems perfectly reasonable for them to release it as a zip.


Why would the commit history open them up to lawsuits, but not the final product?


A trivial example would be a commit that shows something was implemented and then later removed because of a patent/IP violation.

    1-jan-2003 - bob: added new feature X
    5-jan-2003 - bob: Rev 1.0
    8-oct-2012 - frank: Turns out X was patented in 2001 but we didn't realize it. Replace feature X with something that doesn't violate patent ######


Look gift horses in the mouth much?

Why "avoid" them?


It sounds like ansimionescu might have been told that Broadcom isn't a good place to work.


It is open source. You can put it on Github if you want to.


Generally speaking, that's not what "open source" means. Reading the license agreement is always required to determine if that's actually okay to do. In this case it's okay as long as you reproduce the license in any redistributions (e.g. hosting it on GitHub).


There appears to be a common confusion that Open Source means "source you can see". Instead, Open Source is a made-up term which was created to avoid the confusion associated with Free Software, as in the Free Software Foundation.

That is to say: an "open source" license which does not let you redistribute is not an Open Source license at all. Even Microsoft has not tried to misuse the term, instead using "Shared Source" for theirs.


The original article states that the code is licensed under a 3-clause BSD license. The 3-clause BSD license permits redistribution: https://en.wikipedia.org/wiki/BSD_licenses


That's pretty much what I just said. "In this case it's okay as long as you reproduce the license in any redistributions (e.g. hosting it on GitHub)."


So how is that "not what 'open source' means"?


I love raspberrypi!

I created the srcmap index of raspberrypi_userland src tree here:

http://www.srcmap.org/s/sl.htm/p=raspberrypi_userland#c=P&d=...

Hope you guys find it useful.


Proud to have worked on this during my 8-month internship at Broadcom. It remains one of the toughest and most rewarding projects to tackle within the company. I am pleasantly surprised to see Broadcom open up like this and release the full documentation. A step in the right direction for a grossly underrated company.


Well done, mate. This is some great work that you did.


I've never heard of an 8-month internship. What sort of program is that? Like a semester+summer?


Probably a co-op program with a University. the University of Waterloo often does 4-month and/or 8-month (depending on the program); the University of Toronto does 12-month, in partnership with companies that are looking to woo students before they've even finished their undergrad.

(Usually it involves the school designing the academic program and shifting when courses are offered (including putting courses in the spring/summer) to accommodate the work term.)

The grandparent appears to go to the University of British Columbia, judging from their profile.


My university (Montreal ETS) also does it. They have 3 (real/full) semesters per year. Students usually take 1 of them per year to work as interns. You can technically work as an intern for like 5 years and still be considered enrolled in the engineering program. Students will prefer 8-month internships either for well-paid jobs or for jobs that cannot be done in a 4-month development cycle (like a video game). In the end, it does increase the total time spent to graduate, but students tend to have much lower debts (if not a profit) at the end.

These kinds of deals seem to be quite widespread in Canada (just as the above commenter shows). I know these private sector <-> teaching organization deals are not well regarded in some parts of the world, but, in the end, someone who just graduated but has accumulated years of full-time working experience has an easier time landing a job, at least for STEM jobs.


It's a sweet deal for companies if you think about it. Maybe benefits are different in Canada, since I heard that the govt. provides healthcare, but in the US the employer has to provide that. Plus, more salary, 401(k), etc.

Actually, from another viewpoint, for small companies it's very beneficial to hire interns; they can be used to work on projects which might be interesting but have low financial returns.

Which leads me to conclude that >3-month internships should perhaps be allowed only if you're working at a startup, or at small-scale shops that can't really (I mean really really) afford to pay you. For a company like Broadcom, that doesn't make sense.

Of course, this is just my subjective viewpoint. I'm inclined to believe that my sense of justice might be very skewed.


Interns are paid in every field but medicine and education. Some technical schools (including mine) will refuse internships paid anywhere near minimum salary (they care way too much about the statistics and marketing, which is, in turn, a good deal for students too). I am not claiming it is a Klondike for every intern (it's not), but having a ~$20-30/h mostly tax-free salary is really helpful to avoid loans with interest. Education is not very expensive in Quebec; fees will be around $6k/year (nine 15-week courses + 1 internship fee, books/material not included), but after adding the cost of living, rent, food and everything, it gets to around $20k/y. Given the state-provided healthcare and services that only the first quarter* of citizens have to pay for, it is even financially beneficial for some people.

*(The other three quarters would be elders, children and those below the poverty threshold)


That's really a surprise. Broadcom is cemented in my mind as one of the hardest companies to get even basic datasheets out of.

I know the saying is "don't look a gift horse in the mouth", but could it be that their graphics cores are not competitive in pure technical capabilities, so they've decided to compete on openness? I'm not really involved in that area of hardware/software.


Nope (though note, I'm biased for several reasons). In terms of performance per unit area and performance per Watt, VideoCore IV is about as good as you're going to get. There are chips with higher performance GPUs out there, but they get there by brute force: throwing area (and therefore cost) and/or power at the problem.


This is confirmed by the recent move of AMD towards extremely hot GPUs. You already see that in the most recent 290x, but they're talking about making hardware that is comfortable running at >100C.

I'm not sure how relevant that is with regards to performance per watt, but I do know that heat is one of the most expensive (by-)products of electricity.


That's not really relevant. Broadcom's VideoCore competes against the likes of PowerVR, ARM's Mali, and Qualcomm's Adreno. Those GPUs are all designed for mobile use, and cannot easily scale up to compete with desktop/workstation-class GPUs due to architectural differences and fab process differences.

AMD's chips aren't egregiously power-hungry, they just sell into a market where the product segments are defined by the thermal and electrical limitations of the PCIe expansion card form factor. It will always be the case that the top of the line card draws as much power as can be delivered through the slot and two extra connectors, and as much as can be dissipated through a two-slot cooling system. If AMD can safely operate their chips at a higher temperature, then their cooling system will be a bit more efficient, so their thermal envelope expands a bit and they can increase performance by a few percent.

Early indications are that NVidia's upcoming Maxwell architecture achieves significant efficiency improvements due to being designed with mobile use in mind, but it hasn't yet been demonstrated in a large enough configuration to compare against high-end desktop graphics chips.


I stand corrected.


I just switched my GPU to an nVidia GTX 750 for precisely this reason. I have been near exclusively using Radeons for ten years, but the last few generations just run too hot and loud.


Isn't there some theorem of thermodynamics that says your device gets more efficient as the temperature difference between the hot and cold parts of the device increases?

AFAIK the only reasons devices aren't run at arbitrarily high temperatures is that material properties change unfavorably, or you risk fires.


The efficiency thing is more about heat engines. When we're just dissipating heat, the relevant relationship is that the rate of heat transfer is proportional to the temperature difference. This is why a hot object that isn't producing heat will cool exponentially. In the case of a steady-state source of heat, the temperature difference will be inversely proportional to the thermal conductivity of the connection between hot and cold reservoirs. Heatsinks and fans are used to increase that thermal conductivity so that the temperature difference stays small enough that the absolute temperature is safe. If you can double the tolerable temperature difference, then you can cut your fan speeds by more than half, or even get by with passive convection.
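
A toy calculation of that steady-state relationship (all numbers invented for the example):

    #include <stdio.h>

    /* Steady state: a chip dissipating power P through a thermal path of
       conductance G settles at deltaT = P / G above ambient, so doubling
       G (bigger heatsink, faster fans) halves deltaT. */
    int main(void)
    {
        const double P = 250.0;       /* watts dissipated under load */
        const double ambient = 25.0;  /* room temperature, deg C */
        for (double G = 2.0; G <= 16.0; G *= 2.0)
            printf("G = %4.1f W/K -> die at %5.1f C\n", G, ambient + P / G);
        return 0;
    }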

Or, approaching things differently, if you're only going to keep your graphics card for two or three years, there's no reason to let the fans spin up to an audible level until the chip reaches at least 80 degrees C.

Addressing a common misconception: fan speed/noise is not directly related to how much the graphics card is heating up the room. Whether you run the fans at full tilt or you let the chip almost melt, the heat output will depend only on the workload.


The thermodynamics you are remembering mostly refers to Carnot engines which convert temperature differences to work.


Waiting for the VideoCore V :) I noticed that the Broadcom LTE offerings such as the M320 do not use VideoCore and instead use PowerVR 5XT. Is it because they were originally Renesas designs? Hoping to see more VideoCore action in the future :)


:)


Two years ago (or whenever the BCM2835 itself came out on the market), sure, but surely things have improved since? We've at least jumped a technology node, and hasn't nVidia improved?


Less than you'd think. Other vendors have improved somewhat, but often from pretty poor starting points. Jumping a process node is a "brute force" approach: we can all do that. I'm more interested in apples-to-apples comparisons on a given node.


Had you worked on it directly, or were you working on other parts of the chip while at Broadcom? I've always been hoping to see some hard benchmarks. I had worked on a video codec IP for a competing (but defunct) chip, hence my curiosity :)

I'm curious about the technological history of the VideoCore. The wikipedia page [1] and Broadcom marketing page [2] don't give a lot of information to distinguish it from competing cores (such as ARM's own Mali or nVidia's Tegra line). In fact, the information I see makes it seem like pretty run-of-the-mill tech of 5 years ago.

I guess I could RTFM now that it's available :D

Edit/afterthought: none of this really matters actually, as there is no other chip on the market with the BCM2835's capabilities that is more open-sourced.

[1] http://en.wikipedia.org/wiki/VideoCore [2] http://www.broadcom.com/products/technology/mobmm_videocore....


I did work on it directly. James (now working solely for Pi as HW director), Gordon (now working solely for Pi as SW director) and I (still working for Broadcom, and for Pi) were members of the VideoCore IV design team. James and I were responsible for the QPU (quad processor unit) design and implementation in the V3D block.

There's no one thing about V3D which makes it superior to other cores. Just lots of attention to detail at the design, RTL implementation and layout stages.


Looking at the long-term future of the market, Broadcom's move isn't surprising at all. Broadcom (and Qualcomm for that matter) is feeling a lot of pressure from Intel and NVidia, both of whom have a lot of new stuff that is scaring competitors (which is fucking great for us). They're scared not just from a technological or capital/market perspective, but also from a community-exposure one. They're both firing on all cylinders right now, improving their technology and trying to get as many engineers to use it as possible. While firmware engineers tend to be older, there is a constant cycle, and the industry is starting to see the importance of open source, especially as die complexity grows and more and more testing is required for each silicon design.

Intel: 2.5W TDP quad core Atoms with Intel's x86 instruction pipeline and memory management unit have been released as well as a low cost SDR that should be able to dynamically switch between unlicensed, GSM, CDMA, and LTE spectrum. Intel's graphics work is also quickly catching up and will soon start trickling into their low power embedded chips. Intel has also made clear moves with Galileo to open up their technology to the hobbyist engineers.

NVIDIA: The Tegra K1 is out to NVIDIA's partners (supposedly it beats VideoCore and even low end integrated desktop graphics) and they are currently being worked into tablets. NVIDIA also has recently released the i500, their own software defined radio to compete with Intel, and both companies have the resources to end the lock-in that Qualcomm and Broadcom have had in their respective baseband markets. If you're already buying radios from them, it makes sense to get your processor from them too and Broadcom may be trying to hedge against that.

NVIDIA (and maybe Intel) aren't known to be pioneers of open source, but they're better than most other silicon companies, and everyone else is really starting to feel the pressure. The problem of supporting many different device configurations is an old one on desktop systems, and that problem is only magnified 10x for embedded engineers because of the opacity. With Intel's and NVIDIA's expertise on the desktop as they're running full steam ahead into embedded (and kicking ass along the way), they're scary to the old guard.


> NVIDIA (and maybe Intel)

Intel has a huge workforce dedicated to open source. They fund huge chunks of kernel, Wayland, X and Mesa development, and I'm sure they have engineers on projects throughout the foss stack. Yeah, their hardware is pretty systemically evil patent drivel black box nonsense, and their chipsets are some of the most obscure bullshit ever, but everything past the firmware they do a good job on.

Nvidia, on the other hand, consistently blows across the whole stack. Their foss Tegra drivers are always, at most, 2d accelerated, and all their recent contributions to Nouveau (all two of them) are just to make SoCs bootable on Nouveau so you can install their blob driver.

Meanwhile, technologies like CUDA, GSYNC, PHYSX, etc are all proprietary platform locked messes that hurt open tech in general. So I'd definitely rank them way below Intel on a comparison of openness.


Thank you for clarifying. I wrote "maybe Intel" because I've never looked at their reputation for open source.


This is almost certainly driven entirely by Eben Upton, who is the link between Broadcom and Raspberry Pi. He's probably been bundling up the huge list of demands for OSS video drivers and dumping them on the desk of the right person, while wearing down the objectors. It's something that Raspberry Pi have wanted for ages.

Edit: ah, Eben has turned up here himself :)


> But we’re incredibly proud that VideoCore IV is the first publicly documented mobile graphics core

Wasn't Intel first by virtue of using their in-house "HD Graphics" hardware in Silvermont (Bay Trail-M and -T)? Regardless, this is a huge thing in the ARM world, and I hope other vendors will begin to open up documentation.


Bay Trail-M means "mobile" as in "laptop", not "mobile" as in "smartphone". Intel's smartphone stuff is all still using PowerVR, including the upcoming Merrifield, which will be using the PowerVR G64x0.


Good point. Still hard for me to remember that mobile != ARM these days.


>"Earlier today, Broadcom announced the release of full documentation for the VideoCore IV graphics core, and a complete source release of the graphics stack under a 3-clause BSD license. The source release targets the BCM21553 cellphone chip, but it should be reasonably straightforward to port this to the BCM2835, allowing access to the graphics core without using the blob. As an incentive to do this work, we will pay a bounty of $10,000 to the first person to demonstrate to us satisfactorily that they can successfully run Quake III at a playable framerate on Raspberry Pi using these drivers. "

At last!


Just $10,000? Nice gesture, but they can afford to dig a little deeper. Their card running Quake III at a reasonable frame rate on a Pi is a big deal, and these cheapskates know it. (I hope the person who figures it out doesn't give them the code; just show them and collect the $10,000 - a year's rent in a bad part of Oakland.)


(a) They are a charity.

(b) They demonstrated the Pi running Quake 3 at a reasonable frame rate in 2011. There's a picture of it in the blog post.

(c) The competition rules say "To submit an Entry, an Entrant shall email a link to a public GitHub repository containing the full source code for the Entry..." so just showing the result wouldn't be eligible to win. That and they already know what it looks like, because of (b).


They aren't really a charity; they sell a product from a manufacturer (the Broadcom SoC) and were started by Broadcom employees. So, in a way, this "charity" is really just a marketing subsidiary of Broadcom.


Whatever RasPi is, the fact remains that they're selling a relatively open platform at price near production costs. And that platform just got significantly more open.

And they are now offering a bounty for the creation (refinement, really) of more free software. Which will benefit the entire community surrounding the RasPi. So that is good. Even if they were offering $10 USD, there might be someone who'll take up the challenge just to get the bragging rights.


"Relatively open" in the way that your mother was "relatively pregnant" when you were born?


Are you suggesting he's a premie?


No, I'm suggesting that "relatively open" is semantically null.


> No, I'm suggesting that "relatively open" is semantically null.

I don't know if you are familiar with the embedded world, but a large percentage of the offerings (specifically system on chip or SoC) have NDA requirements for the documentation.

And many of these platforms don't have a decent Linux port, even though they could support it.

Even with SoC vendors such as TI, the graphics core for the OMAP4 (for example) is a binary blob. There is no documentation for it, and no hope of that situation changing (in part because TI has exited the mobile phone market).

Please tell us of your mobile platform experience, and what vendors you consider open or closed.


They are a UK-registered charity organization, and the Raspberry Pi is much more than just the Broadcom SoC.


"The Raspberry Pi Foundation is a UK registered charity (Registration Number 1129409)"


In that case everyone in the industry is a charity case. Almost every invention, from the advent of integrated circuits, digital signal processors, LCD displays, even Google all the way up to the Internet itself — came from publicly funded research projects. In the computer industry in particular the "you didn't build that" claim is probably more true than anywhere else. Shoulders of giants, folks.


>So, in a way, this "charity" is really just a marketing subsidiary of Broadcom.

You could look at it that way if you like. But if you compare the rPi to the comparable low-priced educational development boards available when the project started (none), you might conclude that their marketing effort had a good and very real effect on the availability of low-cost edu dev-kits for both Broadcom SoC products and others' as well (like the BBB). So, as both an educator and Open Source advocate, I am fairly pleased with the rPi project, and even if it isn't exactly what I would have wished for, I can't imagine how it could have been more successful or had a better result.


rPi is probably the most aggressively priced, but you talk about it like that market didn't exist. The BeagleBoard is probably the closest, though that was designed more as a dev-board for ARM processors and not a mini-computer like the Pi.


>but you talk about it like that market didn't exist.

For most practical purposes, it didn't. What existed was either inferior to the rPi and more expensive (>4x) or approximately equivalent and much more expensive (>8x). It has had a significant positive impact on the way instruction is given.


Q3 is already running on the Pi using the proprietary drivers. This competition is for porting these new open sourced drivers to the Pi and using them to run Q3.


It could be that $10,000 is the amount some mid-to-high level manager can probably write off.

Also $10,000 is a significant amount of money.


On the one hand as someone who owns half a dozen Raspberry Pis this is really cool news and I hate to look a gift horse in the mouth.

On the other hand, ARM vendors really have to start opening everything up much more than they have or Intel is going to start eating their lunch in the small embedded Linux arena, and I suspect Intel's mobile resurgence with Bay Trail is at least partly behind this "gift" to us all.


I don't know anything about mobile, but I've been looking at replacing my htpc ION system (edit: not that it really needs it, frankly it works great), and fanless Bay Trail boards/cases look very interesting. More cost than a Pi + Pi codec licensing obviously - but not that much, and all open, as you pointed out...


I looked up the documents on it and Bay Trail is surprisingly open for an Intel SoC... although there's probably still plenty of stuff they won't tell you about.


The main touchy-feely moment for me here is that a notoriously closed, $17-billion company was brought around by a friendly, gentle approach to convincing it to be more open. Group hug for Broadcom.


Same could be said of their (albeit paltry) wifi open driver support since 2011. They at least have one now, for very few chipsets. Doesn't work with any of the half dozen Broadcom NICs I have in my "junk parts" bin, though.


This is a step in the right direction. Does this mean that the RaspberryPi can run blob-free now or with one less blob? I don't know what else is needed for the SoC to operate.


Not quite yet. We're hoping to be able to provide a minimum-functionality blobless world fairly soon.


I think this is another example of how Raspberry Pi Foundation's decision not to move forward with a gazillion different boards, but rather to stick with A and B and let the software evolve, was a good one.

I'm very excited to see what the next year brings.


Different development boards (like the Beaglebone Black) are already more open, but get much less attention. The Raspberry Pi has an enormous amount of mindshare, but they've been doing a pretty poor job until now of shifting the status quo of this industry.


Yea, with this happening, I think what I would love is a bit more CPU power on the Pi. Not even the new 64-bit ARM cores or anything, but an A15 or A9 at double the MHz of the existing Pi would put it in a huge sweet spot for embedded performance and make it far more usable as a desktop (ideally with more RAM too, but that's always good).


To be fair, the Pi opened the floodgates in terms of ARM boards; there are now loads of them... not $25, but nothing like the insane pre-Pi prices. The BeagleBone Black from TI is a good step up: 1GHz A8 (~2x rPi performance), 512MB RAM, $45. They don't really need to segment themselves (yet), as others can fill in the gaps.


It sounds like you really want a TI Pandaboard or maybe even one of the Intel SoCs...


Are you able to give any indication what might be included (or not) in such minimum functionality?


The ability to boot a kernel with USB (and thus Ethernet) running. Probably not display in the first instance (we're hella-resource-constrained), but we'd want to add that later on.


From the article -

"This isn’t the end of the road for us: there are still significant parts of the multimedia hardware on BCM2835 which are only accessible via the blob."


Ah, thanks. Poor reading comprehension on my part.


Will running without relying on blobs mean that processing can be further optimised, or is it more of a moral victory than a technical progression?

I'm assuming doing away with the shim means that there is some gain, but in practical terms is it likely to be quite minimal? Will there be routines accessible that weren't accessible via the shim?


More a moral victory from my PoV. Remember, code that runs in the blob is running on another processor, so not costing you ARM cycles. That's why we've set the Quake fps target much lower than what you can hit with the blob.


Thanks for all the effort to get to this stage, Eben. As someone who was repeatedly told (on the RP forums) that I had no business wanting the user-land graphics to be open-sourced, and that only hardware engineers would get any benefit from this code, I thought this day would never come!

I'm happy to be proved wrong, and happy that many little 'RasPis' will have an extended lifespan from this decision by your employer. :-)


Fingers crossed. I'm most excited about the GPGPU stuff people will be able to do now we've documented the instruction set for the QPU. More on this next week.


Well, let's see if someone can get Android to use the GPU now.

And yes, I'm not expecting miracles -- even though Broadcom never released their 4.x port, what they demoed on the-teaser-video-that-was-never-followed-up-on seemed "fast enough" for a number of purposes.


IIRC one of the bigger obstacles to RPi Android support was that the driver libs were linked against glibc. There will be other bumps, but just getting these linked against bionic will be a big help.

I'd love to see Android support, if only for Android's recovery system and read-only system images.


I wonder if anything GPGPU will come of this... I'm thinking it's probably not possible. OpenCL?


One of the bullet points on the broadcom blog post is: "Write general-purpose code leveraging the GPU compute capability on VideoCore devices"


Keen eye (and mind).. Thanks.


I'm building an environment for computer vision, and I recently made a client that can run on the rpi (two minute video if you're interested: http://www.youtube.com/watch?v=qSigkvbpb_Q). I'm not exactly sure what possibilities this opens up, but I'm definitely going to dig into it and find out.


Wow. In due time these guys might start actually releasing datasheets to developers who, you know, need to know the register set and API to like, make their chips work. I don't know why the hell these idiots treat their firmware APIs with Manhattan-project style secrecy, but I would highly advise against designing a new product with a broadcom chipset. They are totally asinine.


Wow. Much open, such surprise.

What next, a full BCM2835 datasheet?

To put in the words of Torvalds: "Hey, this time I'm raising a thumb for Broadcom. Good times."


I don't think you can even start a new product with a broadcom soc if you're not already a huge player.


This is great news. Hopefully this means that ports of android and chromiumOS will be able to utilize this.


> In common with every other ARM-based SoC, using the VideoCore IV 3d graphics core on the Pi requires a block of closed-source binary driver

The Lima driver for ARM Mali is open source and runs Quake 3!


> The Lima driver for ARM Mali is open source and runs Quake 3!

But the Lima driver was created by reverse engineering instead of using publicly available documentation.


Any open source replacement for the Raspberry Pi GPU driver is going to rely pretty heavily on reverse engineering too. The code they've released runs on the VPU, a custom-designed scalar processor within the VideoCore IV that Broadcom still haven't released architecture documentation or a toolchain for. Without a whole bunch of low-level reverse engineering (much of which was already in progress) it won't even be possible to compile the stuff Broadcom have released. Without working VPU code you can't even boot the Pi, let alone make use of graphics.


That's not correct. The code that's been released is for BCM21553, which doesn't have a VPU. There are bits and pieces of VPU assembler in there, but they're unused on the ARM target. The intention is that people port this (ARM-based) drop to run on the ARM on BCM2835. No reverse engineering required.


Ah, that makes more sense, thanks! There's an awful lot of vestigial VPU-related stuff lying around the source tree, including what I'm guessing are the unused remnants of a dead build system...


They donate to WebKit? I thought that was fully funded by Apple.


Yes. See here:

http://www.raspberrypi.org/archives/5535

A lot of this work goes upstream.


There's a significant WebKit consultancy industry.


Google funds WebKit as well.


No, they did fund large parts of it, but forked quite a while ago.


They were actually the biggest contributor, above Apple, for a while before the fork.


This is awesome news. I hope other vendors follow suit.


Quick noob question...is the Pi still the best/most flexible option for general hardware hacking? What about ARM dev boards from Samsung, Qualcomm etc?


'Best' can be measured in a few ways. The RPi has a huge community, and it's certainly a bargain (though you will end up buying a bunch of odds and ends to get going). The BeagleBone Black is probably the other major contender: way more GPIO pins, can boot from the onboard memory, and is ARMv7, for $10 more.


BeagleBone Black has just one USB port, which is often very inconvenient. For example, if you want to run Octoprint accessible via WiFi and control a 3d printer, two USB ports will be needed. Certainly, a USB hub is an option, but it makes the setup more brittle (some USB WiFi dongles won't work over some hubs) and more expensive.


This is genuinely great news & a vast improvement over the previous graphics driver!

Hopefully this means we'll see proper DRM / Gallium support for the IV & I'm looking forward to seeing what people can come up with now that they have full access to the GPU.

Congrats to Eben & the team - you've done good.


This is incredible. Thank you Broadcom!


I remember how satisfying it was to get that demo to work back in 2011 (thanks to the help of Dave Emett). Feels like an incredibly long time ago now! I wonder what framerate people will be able to achieve...


It's great news. It's also not quite enough to get rid of the entire binary blob, as the article notes near the bottom. (I have the feeling some didn't read that far.)


Oh woah - I'm very happy about the BSD license. This means people can more easily spin up businesses around this platform. I wonder why they chose it...

Does anyone have any insight?


BSD-like licenses seem to be the standard for userland graphics libraries. There's a lot of useful stuff in there (for example the shader compiler) that may be of general use elsewhere, and we wanted to make sure people could use it.

The 3-clause BSD is a compromise - we're happy for the stuff to be used but it's nice to get some credit :)


Thank you for the clarification. It seems overly generous =)

Often code is GPL'd with pricey licenses for commercial applications - which always makes me think twice about investing my time into familiarizing with it.

I'm very impressed they aren't trying to make a quick buck here. It sounds like they have a very innovative long term strategy that will pay off many times over.

I think your competition is also fantastic, as it will encourage people to open source their improvements.

Thank you for all your hard work, Dr. Upton


You're most welcome. Good times.


My favourite part of this announcement is:

* Okay, so sue us: we launched on February 29. Please don’t sue us.


This has been my biggest disappointment about the raspi since day one. If true, this is huge news indeed.


Would be awesome if we soon see OpenCL on the RPi. Now I've got to go find mine!


lol i lost it...

> Okay, so sue us: we launched on February 29.

> Please don’t sue us.


Celebrate the open source release of the drivers by giving out a prize for making non-free software run on it? Also one that the intended audience of the hardware is not allowed to play? Seems a bit odd.

(The Quake 3 engine is open source and free nowadays, but the assets and maps are not, so you cannot run "Quake 3" without using the demo. So "Quake 3" is non-free.)


Erm, I'm pretty sure you can use a shareware version (!) of the Q3 assets. Or any of a number of other things.

The idTech2 and idTech3 engines are standbys of the open sores community--it makes sense to use them for testing graphics drivers. You are far more likely to encounter a bug in your driver than in those engines.


Even if free as in beer, the demo version is non-free in an open source sense.

Sure, but you could use any of the many projects built on the open Q3 engine that are actually free as in speech. (And maybe child-friendly)

Also, to be clear: I find it odd, but still a fun endeavor.


Who, other than certain parents with opinions they can't scientifically justify, says anyone isn't "allowed to play" Quake 3?


In Germany, putting Quake 3 into the hands of minors is an offence, for starters.

So, in our case, these nice people: https://en.wikipedia.org/wiki/Federal_Department_for_Media_H...

Like it or not.



