Also, Bunnie is a very cool guy; his blog is a wealth of information on hardware design and the Chinese manufacturing process. If anyone can make a laptop where every component is open source, it's him. It might take time, but hey.
It's actually a laptop for FPGA hackers, with incremental steps in the direction of an open-source laptop. That's great. That they made FPGA development a bit easier and made a more open laptop is extra great. So, let's give them the props they deserve while acknowledging it's still untrustworthy hardware and there's a long way to go.
Fully open hardware is never going to be quite as achievable as open source software because the manufacturing and distribution costs cannot be driven to zero in the same way. Furthermore, ASICs can't be rebuilt by the end user in the same way as software. You're always going to be dependent on a device coming out of someone else's factory.
Openness remains a minority interest, and therefore the NRE cost amortized in every unit is going to be higher. Signs of people being willing to pay for open hardware (the original designers who did the development, not the cloners who didn't) are scarce.
And when someone does offer something like this to the community, what do they get? Negativity.
It's not even an unachievable amount of money that would be required to do a fully open CPU+GPU by paid professionals. $10m at the bottom end. Much smaller than, say, the Star Citizen kickstarter. What's unachievable is the community unity of purpose behind it.
I don't think it is easy to achieve verifiability to be above nitpicking of the kind we're seeing here. Assume we start with open RTL in some HDL. You then have to run it through layout and get someone with a Formality license to check it's equivalent (trusting that particular piece of software). You then take your GDSII to a fab. If you want to check that they've not inserted something, you have to resort to sampling production and sending it to the specialists with microscopes to check it - but then you have to trust their reports. And none of this is provable to doubting strangers on the internet. It's always possible for someone to assert that the whole group of people responsible is a clever fake by the CIA. You can't check it on your desk the same way you might be able to check software with reproducible builds (not that more than a few people do this anyway).
Everything you just said applies to software as well because you have to trust what your machine shows you. And it might be subverted.
"I don't think it is easy to achieve verifiability to be above nitpicking of the kind we're seeing here."
The most basic verifiability for open HW is to use components that have documented, open HDL, with the rest of what was produced open for 3rd-party verification. Gaisler already did it for a CPU and a whole IP library. The Rocket RISC-V team did their part. The Magic-1 guy did it with wire-wrap on TTL chips. None of these people had $10+ million. Gaisler shows it was just a matter of getting money together to do the development cost-effectively, licensing to commercial users to sustain it, ensuring any I.P. could be re-synthesized for new use cases, and simply offering source for re-synthesis or just verification.
Now, we can go much further than that, and tools like the Qflow OSS synthesis flow are a step in that direction. However, the most likely approach, given EDA complexity, is doing it with industry-standard tools and having third parties check the results. We might even do it totally in high-end tools, but in a way that the end result can be verified to a large degree with open tools. There are many ways to do that, but all need work. The good news is that there are no Snowden leaks, etc., suggesting EDA tools are subverted. That helps. :)
So, you're creating one strawman after another to justify either (a) not trying or (b) calling something open that's not. The minimal requirement has been done repeatedly in other projects by other people. It takes no extra money on top of a normal HW development. It can even be done both proprietary and open source as Gaisler illustrated for some of his stuff. The result merely has to be open and done in a way that 3rd parties can verify. Optionally, that they can integrate and put on fab of choosing if they desire.
That's the minimum for open-source hardware. If a project does the minimum, I call it open-source hardware. If it doesn't, I call it something else. I'm calling a project that heavily depends on black boxes from DOD-connected and scheming companies about as far from open in operation as possible. If the board is open, I'll call it an open-board or open-PCB design. Novena seems to be an open-board system with OSS software. It's not open HW.
"And when someone does offer something like this to the community, what do they get? Negativity."
Irrelevant: nobody suggested perfection. As a matter of fact, all I suggested was that the project deliver on its stated goal, or that articles on it modify their statements to reflect reality. Many projects accurately portray their capabilities without any financial incentive at all. So, I'm sure this one can, too.
"Fully open hardware is never going to be quite as achievable as open source software because the manufacturing and distribution costs cannot be driven to zero in the same way. "
Fully open hardware has been achieved many times. It just hasn't been achieved at the desktop or laptop level. QED.
"Openness remains a minority interest, and therefore the NRE cost amortized in every unit is going to be higher. Signs of people being willing to pay for open hardware (the original designers who did the development, not the cloners who didn't) are scarce."
Now you have a real problem. You have to get the costs down or just drop the goal entirely.
"It's not even an unachievable amount of money that would be required to do a fully open CPU+GPU by paid professionals. $10m at the bottom end. Much smaller than, say, the Star Citizen kickstarter. What's unachievable is the community unity of purpose behind it."
It's actually easier than you think. All it would take is a number of academic teams, probably fewer than 10, using their discounted tools and grant money to each produce re-usable parts of a SOC on a good process node (e.g. 45nm CMOS). Think USB, DDR, cache, etc. They've already produced a processor (Rocket RISC-V @ 1.4GHz) and many DSP-style chips with up to 160+ cores that can be licensed. The remaining pieces are small, plus SOC integration and testing with tooling.
I explained all this here:
Using academics for as many projects as possible keeps the EDA, MPW, and labor costs low. They can just public-domain everything they build or cheaply license it to commercial projects to fund further development. What's in it for them is job/tool experience. In just a few years, we'd have every major component we need. Then, a private company (or another academic team) could put it together as a SOC, leveraging tools they already bought, with funding from their own operations, Kickstarter, grant money, private donors, whatever. Many funding models.
So, far from an impossibility, there are multiple paths to get to the SOC's, DSP's, or FPGA's needed for open-source HW. A good chunk of it is already built, some open-sourced, and some just described in papers with enough detail to copy. There are teams all over the world with the skills, tools, and funding that can be motivated to do it. So, enough excuses: start cold-calling them. :)
And things like Novena can present themselves accurately in meantime as huge, black-boxes from scheming companies that are connected together in an open-source way. Seeing it that way, though, might make people concerned with their security less likely to buy it. Re-purposed embedded boards are probably a lot safer. :)
My argument is that you (as a customer buying 1x device) can't do your own verification of the device that you buy. For example, the Gaisler processor ( http://www.gaisler.com/index.php/products/processors/leon3 ) looks genuinely interesting - but it doesn't seem to be a commodity part that you can actually buy off, say, Digikey. If I wanted to sow doubt on its origins, I'd point out that the parent company Cobham PLC is a defence contractor. We're back to "scheming companies".
At some point you, the end user, are going to get a small black epoxy chip package which you have been told contains a particular chip design. You can't modify or realistically inspect that object. While for software you can at least attempt your own build, Ken Thompson attacks notwithstanding.
That's true. It's why you have to do a probabilistic approach with open HW whose behavior can be compared to spec in field. Additionally, you can make sure I/O is easy to inspect so a subset of users monitoring for suspicious behavior can catch it. It's certainly going to be a black box but easier to catch problems if open HW. Also, easier to avoid them if you retarget it with obfuscations and mask/fab vendor of your choosing. Much more difficult problem for your enemies than a total black box(s) from one source and set of vendors who are all purely profit-motivated (read: subject to bribes or contract coercion).
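The sampling approach mentioned above (decapping a subset of production units to look for inserted logic) can be put in rough numbers. This is a back-of-envelope sketch with hypothetical figures, assuming subverted units are distributed randomly through a run and that inspecting a bad unit always reveals the implant:

```python
# Back-of-envelope: destructive sampling of production chips.
# All numbers are illustrative, not from any real fab run.

def detection_probability(subverted_fraction, samples):
    """P(at least one sampled chip is subverted)."""
    return 1.0 - (1.0 - subverted_fraction) ** samples

# A broad attack (5% of the run subverted) is caught with high
# probability by decapping 60 random units; a targeted 0.1% attack
# slips past the same sampling budget almost every time.
p_broad = detection_probability(0.05, 60)
p_targeted = detection_probability(0.001, 60)
print(f"broad attack (5% of run), 60 samples:    {p_broad:.3f}")
print(f"targeted attack (0.1% of run), 60 samples: {p_targeted:.3f}")
```

The asymmetry is the point: sampling deters mass subversion cheaply but does little against an attacker who targets a handful of units, which is why the in-field behavioral monitoring mentioned above matters too.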
Note: Gaisler's stuff predates Cobham IIRC, mainly funded by grant money for embedded and space applications, etc., plus licensing what he built. He also gives synthesizable specs and code to his customers, with OSS software. The risk level of Gaisler doesn't begin to compare with Freescale, Xilinx, etc., even post-Cobham, because you can inspect the product's source and even strip what you don't trust.
Additionally, your easily inspected software runs on a small, black, epoxy chip. You don't know what that software is doing any more than you know what the chip is doing. Especially if either gets hacked. My point is your arguments counter FOSS software as easily as FOSS hardware on verification side with same results as my hardware security approach. Yet, you still go for FOSS software because you think that level of assurance and verification is worthwhile along with future potential it provides. I'm making same argument for HW while citing other companies and projects that do in practice what I'm encouraging to justify it as practical. The full benefits will take time to accumulate. That's fine as long as we're on that path. :)
The tool chain for the FPGA is proprietary and a black box. There are some reverse-engineering efforts, but it's hard to do anything meaningful. By and large, open HW is in a fairly sorry state. Most users seem to think zero cost is good enough.
Open-source FPGA architecture at 45nm
Open-source bitstream generation for Xilinx w/out EULA violation or R.E. (!?)
Open tools for Lattice
Open-source HW synthesis flow
Note: Cliff is on a winning streak, eh?
I recently did a write-up showing what it could or would take at both ASIC and production levels here:
So, paths are clear and plenty of potential. Just no uptake. Will take government, corporate, or private sponsors working with academic (low NRE) and professional (experienced) HW designers that open-source stuff as they go. Proprietary, dual-licensed OSS is the only way to go for HW that I know of.
Thanks for the links. I knew about the efforts to generate bitstreams for Lattice but haven't kept up. The big issue is really good P&R with a working optimizer for anything bigger than toy designs and chips. It seems Cliff has made some real progress, yes.
And it is interesting that it is basically easier to go open with ASIC than FPGA. The problem is getting your chip manufactured.
What would you say would be the best path for Cryptech to try and go open with the flow? Alternatively, to mitigate black boxes in the FPGA of commercial chips?
We currently fill the Spartan-6 LX45 on the Novena to the max. We target the biggest Artix-7 you can design for with the zero-cost versions of the Xilinx tools for our Alpha board.
I've toyed with the idea of mapping a synthetic FPGA into a commercial FPGA and then implementing the real use-case core in the synthetic FPGA. This would allow greater control over how the design is implemented. But then you need a good optimizer. And the cost to implement a gate will require even more transistors than the FPGA already does. Mapping the synthetic FPGA to the underlying tech can be done pretty efficiently, but configuration support is costly.
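The overhead argument above can be seen in a toy model of one overlay cell: a 4-input LUT whose truth table is loaded from configuration bits. Even a single AND gate costs 16 config bits (plus routing), on top of whatever the host FPGA already spends per LUT. This is a behavioral sketch, not any real overlay architecture:

```python
# Toy model of one cell of a "synthetic" (overlay) FPGA: a 4-input LUT
# configured by a 16-entry truth table, as loaded from a bitstream.

def make_lut4(config_bits):
    """config_bits: 16 values, indexed by the 4 inputs packed as a nibble."""
    assert len(config_bits) == 16
    def lut(a, b, c, d):
        index = (a << 3) | (b << 2) | (c << 1) | d
        return config_bits[index]
    return lut

# Configure the LUT as a 4-input AND: only index 0b1111 outputs 1.
and4 = make_lut4([0] * 15 + [1])
print(and4(1, 1, 1, 1))  # 1
print(and4(1, 0, 1, 1))  # 0
```

Emulating this cell on a commercial FPGA means the host spends its own LUTs both on the function evaluation and on holding those 16 configuration bits, which is where the transistor-count blowup comes from.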
Yep. The only one I know about is VPR:
Another angle on the topic listing specific tools and issues:
You might want to test whether Qflow/Yosys and/or VPR can handle your designs. We need to know where they're at in terms of usability and performance to identify what improvements remain to be done. Also, crypto algorithms are usually small enough to fit on the Lattice FPGA's. So, try Cliff's IceStorm synthesis flow for them. It would be great if that worked, given their take-up in embedded and how many algorithms could be put on them.
"What would you say would be the best path for Cryptech to try and go open with the flow? Alternatively, to mitigate black boxes in the FPGA of commercial chips?"
"I've toyed with the idea of mapping a synthetic FPGA into a commercial FPGA and then implement the real use case core in the synthetic FPGA."
That was the recommendation I wrote last night in another comment haha. To be honest, given poor demand, I'd actually recommend you do nothing for open hardware. :( If you're ideological or brave, there's basically three paths you can go down: cheap ASIC w/ MPW's over time; S-ASIC; FPGA clone. Let's look at them in reverse.
Cloning Xilinx/Altera FPGA's was one of my first ideas, too. The concept was that their HLS or place-and-route tools could get things going in an efficient way. Then, OSS tools could suck the bitstream out and re-apply it to my comparable device. A combination of Qflow and VPR could be used as an open alternative targeting my device directly.
The biggest risk here, other than NRE costs and tooling risk, is lawsuits from Xilinx or Altera, esp over patents. They could make an ASIC look cheap. Still, it might be cheaper per unit if a royalty was negotiated, given the outrageous unit prices of commercial FPGA's. And you'd have a box you can trust and use for other products, even license to third parties for onboard FPGA logic in their SOC's. You could go quite a long time without a lawsuit while keeping the FPGA part an NDA secret.
Also, worth considering the security advantage of controlling how I.P. got onto the FPGA to prevent external attacks or using anti-fuse variant of Xilinx/Altera to maintain integrity. Software attacks are the more practical concern. I'm not convinced Big Two do enough on that. So, having trusted loading or secure interfaces to FPGA could be justification enough to design one given your market.
Alternatively, fund more academic development of something like Archipelago w/ VPR etc to target it. Keeps anyone from claiming you're using their stuff outside of occasional patent suits which happen anyway. Can also keep the complexity and cost at levels you like rather than what Big Two think is marketable.
The next option, Structured ASIC's, is quite under-utilized. They're basically FPGA's where the routing is done with a fixed layer in ASIC that the vendor generates during conversion. That it's only one layer makes it way cheaper than an ASIC of the same size. That the wires only connect to the blocks you use means it goes faster and uses less power. That it's an ASIC, not an FPGA, means lower unit prices. Vendors that do this include eASIC's Nextreme and Triad Semi's VCA (their analog blocks can be TRNG's, btw). I know eASIC does maskless prototyping for low cost, plus it already has lots of 3rd-party I.P. on its platform. All I'm saying for S-ASIC is get some estimates for specific NRE and volume, as the jury is still out on whether they're a good idea. Also, make sure your FPGA HW is designed for easy conversion to ASIC up-front, as it's not as easy as they promise. ;)
The last option is Standard Cell ASIC. The cheap way of doing this is to figure out the absolute least amount of features (esp I/O interfaces) you need to get the job done on the oldest process node that can handle them. Then, just gradually move I.P. onto it piece by piece with multi-project wafer runs to keep costs in check. The 180nm and 350nm nodes are popular right now because masks are cheap due to fully-depreciated equipment and most bugs having been worked out. I believe the fabs will even hold your mask for you so you can keep making more chips from it over time. Curious what that costs. Anyway, Tekmos has MPW offerings from 350nm to 65nm, with a 2.5D option that combines 180nm/350nm logic with 90nm flash memory. I've never used them, so I'm unsure of cost/reliability.
In any case, your options all cost significant money if you want to hold a chip in your hand that you built. The costs, whether Standard Cell or S-ASIC, are lower than they've ever been. Homebrew FPGA on a node like 45nm (90nm minimum) with academics and their cheap tools doing the development seems ideal as it will keep paying off. Can always use a 3rd party tool for synthesis if OSS doesn't cut it. Otherwise, can put your crypto on ASIC piecemeal or see if eASIC, etc can do it cheap. And safest of all: don't do shit about the problem and let your customers worry about it until they pony up the cash to let you solve it for them.
Wish I had specific recommendations that cost you a new car instead of a house or two. But this is HW development after all. Hope some of this helps in your own explorations on FPGA's or developing your offerings. :)
If this is using an i.MX6, one great way to interface the FPGA with the CPU is through the EIM (External Interface Module). It gives you a quite fast interface without giving the FPGA direct access to RAM. It might not be fast enough to connect a 100 MSPS ADC, but for many things it's quite great, and it's easy to write both the Verilog/VHDL and the kernel-side driver for.
If anybody wants to build FPGA designs for the Novena, I would recommend starting with our top-level design and build setup - just clone and go.
We get good performance out of the FPGA as a configurable coprocessor.
Synflow is interesting because it's a step up from Verilog/VHDL with open-source tooling. Their strategy sounds smarter and easier than many in that it tries to augment the human brain rather than replace it. That it outputs synthesizable Verilog/VHDL is nice. The downside is that one of the founders wrote an article saying HLS is dead, and they transitioned to an internet-of-things company instead. The upside is they probably keep supporting this, anything OSS can be picked up by another team, and he was on HN, very happy to answer any questions. I just lack the hands-on experience to properly put Cflow through its paces and ask him the right questions, you know.
What got me interested, other than Cflow's design, was that they were offering I.P. blocks for around $500. I never see I.P. for three digits. So, I'm hoping it's because they're onto something good. :)
The other one to check out is Synthagate. Synthagate's web site details their methodology using Abstract State Machines, with two different types for control and data. That's interesting, and it's probably expensive. More interesting was Baranov's book, which details his exact methodology for doing HLS. Correct me if I'm wrong, but full details of an HLS process good enough to synthesize microcontrollers rarely leave the trade-secret stage. The real value here, other than a possibly good HLS tool, is a specific method of doing HLS described in enough detail to clone with OSS tools.
Just need someone with HW design expertise, esp in FPGA's, to evaluate what's in the book and tell me if it could be used for algorithms, protocol engines, etc. Does Baranov's method look like it can work for real-world use cases, and does his tool do the job for a set of I.P. representative of what people will really use? The former you can just read, but the latter might need a free trial or something.
I'm thinking it is his secret sauce because he (a) sells it and (b) seems to have pulled it off his web site. Goofball forgot about Wayback Machine. Enjoy. :)
The main objective is to find a tool flexible enough to handle just about arbitrary algorithms and dramatically lower the bar for people/companies wanting to do FPGA acceleration. The less hardware design they need to know, the better. That takes HLS. Worst case, the design happens faster and cheaper, using fewer hardware developers, possibly with less experience.
I found the wording about Xfce to be weird: they claimed not to use Ubuntu because they needed Xfce, but ...
I went with Debian because we knew some Debian people. At one point I considered moving to Ubuntu, because they package "Firefox" and "Thunderbird" (instead of rebranding them) and they have Chromium as well. There were lots of teething problems, though, ranging from their Xorg modesetting driver refusing to use generic KMS on a non-PCI system, to strange oddities when no initrd was present, to the desktop straight-up crashing when logging in (turns out Unity assumes GL is present and has no fallback.) Ubuntu seems to make many assumptions about running on standard x86 hardware, and was rough around the edges. We could have polished it, but it seemed easier to stick with Debian.
The CPU cores run at 1.2 GHz; for reliable transfer you want to run the EIM interface at 66 MHz. You could potentially run the GPU at something like 100 MHz in the Spartan-6.
That is a 10x+ difference compared to the CPU. So your GPU must provide more than a 10x speedup over software to be useful. That's not taking parallel processing into account, though.
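The break-even arithmetic from the clock figures above, as a quick sketch (the exact ratio is 12x, roughly the "10x" quoted; parallelism inside the FPGA and the EIM transfer bottleneck are ignored):

```python
# Clock-ratio break-even for an FPGA coprocessor on the Novena.
cpu_hz = 1.2e9   # i.MX6 CPU core
fpga_hz = 100e6  # plausible soft-core clock in the Spartan-6
eim_hz = 66e6    # reliable EIM transfer clock

clock_ratio = cpu_hz / fpga_hz
print(f"required per-clock speedup: {clock_ratio:.0f}x")
print(f"CPU clock vs EIM clock:     {cpu_hz / eim_hz:.1f}x")
```

In other words, each FPGA clock cycle has to do more than 12 cycles' worth of CPU work before the coprocessor wins, which is why wide parallel datapaths are the only way such an offload pays off.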
I could see a compromise where they used a CPU with an onboard GPU or multimedia acceleration, though. Not sure if it would meet their cost or NDA-free-datasheet requirements.
I'm confused. They claim the open source requirement prevented them from going with an intel chip, but then they say the reason why was because intel could push firmware updates. Furthermore, the Freescale SoC has firmware built in, but it's not open source.
Is this what we consider open source these days? Sure, a good whitepaper is nice, but it's not open source. Don't get me wrong, this is still nice effort on the open front, but calling this a "laptop with no secrets" seems like a stretch.
Now it could be that there is some hardcoded exception handler in the memory controller that jumps to a specific area of ROM when a certain instruction is hit, and that causes exfiltration of data. But to do that would require a lot of separate components interacting, including ones that Freescale doesn't have access to (i.e. the A9 RTL, and possibly the PL-310 and the memory controller) and would require Freescale put the data exfiltration routine in ROM, or maybe there's an exfiltration routine hardcoded into the PL-310, in which case I wonder how they actually get data out of the machine.
As you say, that firmware can't be changed. It can be read out and analyzed for possible security problems, and I hope someone does that. The code is mostly concerned with validating signed boot images for "secure boot" where manufacturers don't want third-party firmwares to be used. It checks the firmware signature / decrypts the firmware using a key burned into OTP fuse bits. Novena doesn't use fuse bits, and in fact doesn't even blow the "boot source" fuses, meaning you're free to change the boot source from internal SD to external SD to SSD to booting from USB.
You can actually blow the fuses yourself and load a key known only to you, which means only you can sign firmware that it'll boot.
Given that it's a ROM and not reprogrammable, and not used after it transfers control to the user bootloader, I think it's fair to treat it more as part of the silicon than as a piece of software. It might be worth auditing for bugs in the boot assurance crypto though.
(If the hardware is malicious, it would be far simpler and less detectable to do it as a silent peripheral, possibly even a whole other processor, than in the boot ROM)
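The boot-ROM check and user-blown fuses described a few comments up can be sketched as follows. This is an illustrative model only: i.MX6 secure boot (HAB) actually fuses a hash of the vendor's public key and verifies an RSA signature over the image, so HMAC here is a stand-in, and none of these names are the real HAB API:

```python
# Sketch of ROM secure boot with a user-controlled OTP key commitment.
import hashlib
import hmac

OTP_FUSES = {"key_hash": None, "boot_locked": False}

def blow_fuses(key):
    # One-time programmable: commit the ROM to a key of your choosing.
    OTP_FUSES["key_hash"] = hashlib.sha256(key).hexdigest()
    OTP_FUSES["boot_locked"] = True

def rom_boot_check(image, signature, key):
    if not OTP_FUSES["boot_locked"]:
        return True  # fuses unblown (Novena's default): boot anything
    if hashlib.sha256(key).hexdigest() != OTP_FUSES["key_hash"]:
        return False  # supplied key doesn't match the fused commitment
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

my_key = b"key known only to me"
image = b"u-boot image"
print(rom_boot_check(image, b"bogus", my_key))  # True: fuses unblown
blow_fuses(my_key)
sig = hmac.new(my_key, image, hashlib.sha256).digest()
print(rom_boot_check(image, sig, my_key))       # True: valid signature
print(rom_boot_check(image, b"bogus", my_key))  # False: bad signature
```

Once the fuses are blown, only images signed with your key boot, which is the "only you can sign firmware" property described above.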
It is a dense and very detailed text, but basically your Intel CPU contains Active Management Technology (AMT) which lets remote users control your computer, which may or may not be what you want, and there might be backdoors hiding here.
It also includes Intel Boot Guard, which prevents users from installing their own firmware (such as libreboot and coreboot) because it needs to be signed with a key from Intel.
The page sums it up like this:
> In summary, the Intel Management Engine and its applications are a backdoor with total access to and control over the rest of the PC. The ME is a threat to freedom, security, and privacy, and the libreboot project strongly recommends avoiding it entirely. Since recent versions of it can't be removed, this means avoiding all recent generations of Intel hardware.
I've got an X1 carbon where the vPro/AMT/ME setup screen doesn't load (CTRL+P on boot does nothing, missing UEFI binary?). Can I still activate it in Linux for my own purposes?
The SoC has closed-source blobs, but it's all debuggable and reversible, which is not as good as OSS, but not as bad as opaque blobs from Intel.
ps: I shouldn't have written ARM, I meant simple ARM-like ISA (from what I understand it's way smaller than x86).
You can also try Texas Instruments chips, but I think the initial bootup starts from a ROM.
The official website above has a lot of info, but there is a nicer intro @ lwn: https://lwn.net/Articles/647636/
Unfortunately, it would be quite difficult for outside organizations to replicate these measurements unless they can pay TSMC for a fab run.
An ISA can't be "fast" or "slow". It's just a specification. There's no reason you can't build a RISC-V core that's just as fast as an ARM or x86 core. The only reason we haven't done so is because we don't have access to the modern fabrication technologies that Intel and commercial ARM licensees use.
Let's assume that application-targeted instructions can reduce the size of the inner loops in these applications by 2x. Even the RISCiest cores do not, in practice, run at 2x the clock speed or 2x the issue rate of cores with application-targeted instructions. Thus, ISAs with baked-in support for these use-case-accelerating instructions will be more performant.
The RISCy core probably wins on mm^2, but a perf/mm^2 analysis will be highly dependent on how well designed the application-specific instructions are for area conservation.
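The argument above reduces to a per-iteration cycle count. Here's a rough model using hypothetical figures consistent with the comment (2x instruction-count reduction, same issue width, a modest clock advantage for the simpler core); these are not measurements of any real chip:

```python
# Per-iteration time for an inner loop with and without
# application-targeted instructions. All numbers are illustrative.

def loop_time_ns(instructions, issue_per_cycle, clock_ghz):
    cycles = instructions / issue_per_cycle
    return cycles / clock_ghz

# Plain RISC core: 2x the instructions, same 2-wide issue,
# 20% faster clock (well short of the 2x it would need).
risc_ns = loop_time_ns(instructions=40, issue_per_cycle=2, clock_ghz=1.2)
spec_ns = loop_time_ns(instructions=20, issue_per_cycle=2, clock_ghz=1.0)
print(f"plain RISC loop:        {risc_ns:.1f} ns")
print(f"app-specific ISA loop:  {spec_ns:.1f} ns")
```

Unless the plain core makes up the full 2x in clock or issue rate, the core with application-targeted instructions finishes the loop first, which is the comparison being made here.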
Personally I'd love for some foundation to just buy up Motorola and open up everything, but I'm guessing a) they depend on IP they can't open up, and b) Google kept all that, and the Motorola that Lenovo now owns(?) is just a brand?
Both FSL and NXP are large ARM licensees so yeah, ain't gonna happen.
But there's a completely open ARMv2 clone: http://opencores.org/project,amber
Clones are even worse off. TurboSilicon was attacked so hard they were thankful to hand all their assets to ARM.
Genuinely curious ... in a previous HN discussion:
ARM chips with special "9th cores" were discussed:
Do ARM based Freescale chips, like the i.MX6, have "features" like this ?
Did they never read any of Snowden's leaks? :(
It's disappointing that if they are "aware" of the problem, they didn't mention it to their audience in an article about their device's security, don't you think?
I'd argue a really good place to start from is a custom Rockchip machine (like Asus C201, which is now supported by Libreboot). Add a free GPU (or finish the Lima driver) and we are ready to go.
The other was that it was difficult to get past airport security. The German agents seemed to at least appreciate that we'd built a laptop that we can trust.
Finally, it was nigh impossible to access things like the FPGA and the serial ports. I find myself using GPIOs all the time to do things like bitbang SWD, and being able to do that by reaching around the screen is handy, not to mention being able to mount the thing being debugged under SWD directly in the case, so it's much easier to take with me.
The true start will be once the lowrisc SoC is out, and open hardware devices using it start to flood the market.
Might be worth laser-cutting a case out of nice wood for one though. Hmm.
Preferably there would be something like the ITX standard, but for laptops, with different standard designs: ultrabook, notebook, and different screen sizes. If there were a standardized screen, you could easily repair your laptop.
All hardware should be open.
The firmware should be open with known checksums.
Open boot loader.
Open source operating system.
All components should be upgradable, so that once the laptop is made obsolete, speed-wise, by Moore's law, it should be easy to disassemble and recycle the materials.
Further, you should be able to reuse components which do not age as fast as the CPU and mainboard, for example the keyboard, mouse, and display.
We throw away a lot of electronic garbage that ends up being exported to cheap scrap yards in China and Africa; this needs to change if we do not want a junkyard planet.
The other possibility would be to fab a chip ourselves, but that's a whole other order of magnitude in terms of cost and complexity, and the result isn't that great in terms of speed and available peripherals. Plus, when you fab a chip like this a lot of the hardware blocks are IP provided by the chip foundry, e.g. flash controllers and DRAM cells, and those are always closed-source. It just moves the whole thing one turtle down.
1. For a reasonable amount of money? Paying for a custom ASIC is quite expensive.
Very cool project :)
2. It's too expensive!
3. Is it a manually-capsuled desktop-style laptop?
4. Is it light enough to carry?
5. Where can I get one?
Apart from that, for the layman "Linux" is generally understood as the whole Linux distribution, not just the kernel or the OS.
Still, even from that perspective this wording is quite confusing.
Still, I was happy to see the comparison even if it was slightly misleading.
They would? Thinkpad margins aren't that great to begin with (which is why IBM sold the business off to Lenovo). So now take out every high-volume, low-cost component in there and replace it with something less popular. Because you no longer have those economies of scale, you don't have the best fabs, so the thermal profile of everything in there sucks, so you have to find some way to put the thing in a lap without causing permanent damage. And then you're marketing the thing to the users of the third-most-popular desktop OS, where the breakdown of market share is something like 90/8/2. But then not to users of the most popular desktop Linux distro, or even the top 10 most popular Linux distros, but to the people who use stuff like gNewSense.
Where's the metric crapton of money?
The author of libreboot, and owner of Minifree (formerly Gluglug), said that he would complete all current orders and then mark them as out of stock.
You can buy a X200 and free your laptop yourself if you dare :) You can swing by #libreboot on Freenode if you have any interest in the project.
I've been considering trying libreboot on my T420s - but as I use it almost every day, it's a bit high risk in case I brick it.
First line of the description
It's a small portable computer that is probably underpowered for everyday general-purpose use.
I've been torturing myself over the idea of having a single banner ad on my site (decided not to for now), but this is something that makes me want to finally give up and install an adblocker.
Free hardware is completely different. You have to rely on other organizations all the way down the chain. And even if you manage to build a tool set to build something, it's not like anyone else can simply copy it and get started. They end up having almost the same struggle that you did.
Consider as well how long it took for the free software view point to become acceptable to mainstream thinking. Even today, almost 2016, there are huge populations of programmers who think free software is crazy. But now there are companies built on free software with near billion dollar valuations, and other companies mostly built with free software that are in the many-billions-of-dollars of valuation. The concept has gotten over the hump, though it was a huge struggle for decades (and it continues to be necessary to make sure that legislation doesn't take our freedom away).
Software was crazily hard. Hardware is nigh on impossible. And yet, they have accomplished this! It's not sad, it's amazing!
I understand that you wish that the world was made up in a way that would make free hardware easy, but it's not. It never will be. It will take the concerted effort of dedicated pioneers to drag it into the consciousness of the masses. I suppose it is sad that it isn't easier in the same way that it is sad that it doesn't occasionally rain candy instead of raindrops. But that is no reason to be disappointed.
I'd love to work for something like this. I might not even need money for it. But I would not do it just to "sleek up this box", I would want to take part in the design process. (Or some money. I can do boring stuff with salary.)
1. These people probably don't know anybody like me.
2. Nobody wants to give away decision power when it's about their pet project.