Novena: A Laptop with No Secrets (ieee.org)
446 points by zmanian on Oct 29, 2015 | 109 comments



I've been following the Novena since its conception. It's a neat idea, and even though it's not 100% open source, it's closer than anything else we have. And the fact that you can just build hardware on top of what's already there, thanks to the FPGA and massive I/O bus, is downright cool. They've already demonstrated things like an oscilloscope.

Also, Bunnie is a very cool guy; his blog is a wealth of information on hardware design and Chinese manufacturing processes. If anyone can make a laptop where every component is open source, it's him. It might take time, but hey.


As someone who bought Bunnie's Hacking The Xbox book many moons ago, it's been enjoyable to follow along project after project.


Good to know that he's working on something new post-Chumby.


It's not a laptop with no secrets, nor fully open-sourced. There are the hardware components' internal operations, the FPGA tools, and so on that still need to be opened. Incidentally, those are the things with the biggest risk, too. Freescale operates DOD-certified fabs in the U.S., and has major sales to the U.S. government on top of that. Whatever subversion risk Intel poses, we must assume Freescale poses as well.

It's actually a laptop for FPGA hackers that takes incremental steps in the direction of an open-source laptop. That's great. That they made FPGA development a bit easier plus made a more open laptop is extra great. So, let's give them the props they deserve while acknowledging it's still untrustworthy hardware with a long way left to go.


This whole thread should be cited on https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good

Fully open hardware is never going to be quite as achievable as open source software because the manufacturing and distribution costs cannot be driven to zero in the same way. Furthermore, ASICs can't be rebuilt by the end user in the same way as software. You're always going to be dependent on a device coming out of someone else's factory.

Openness remains a minority interest, and therefore the NRE cost amortized in every unit is going to be higher. Signs of people being willing to pay for open hardware (the original designers who did the development, not the cloners who didn't) are scarce.

And when someone does offer something like this to the community, what do they get? Negativity.

It's not even an unachievable amount of money that would be required to do a fully open CPU+GPU by paid professionals. $10m at the bottom end. Much smaller than, say, the Star Citizen kickstarter. What's unachievable is the community unity of purpose behind it.


It's not negativity; it's a practical analysis of the fact that the project failed to achieve its singular purpose. He even acknowledges the effort and that it's a good incremental step; but a verifiable computer it is not.


Verifiable is not the same as the stated goal of "as open as possible at a reasonable price using obtainable components". The latter is somewhat flexible, the former is extremely hard to achieve even for milspec gear.


It's extremely easy to achieve. It just costs money and takes effort.


"Going to the moon is easy to achieve, it just costs money and takes effort."

I don't think it is easy to achieve verifiability to be above nitpicking of the kind we're seeing here. Assume we start with open RTL in some HDL. You then have to run it through layout and get someone with a Formality license to check it's equivalent (trusting that particular piece of software). You then take your GDSII to a fab. If you want to check that they've not inserted something, you have to resort to sampling production and sending it to the specialists with microscopes to check it - but then you have to trust their reports. And none of this is provable to doubting strangers on the internet. It's always possible for someone to assert that the whole group of people responsible is a clever fake by the CIA. You can't check it on your desk the same way you might be able to check software with reproducible builds (not that more than a few people do this anyway).


"You can't check it on your desk the same way you might be able to check software with reproducible builds (not that more than a few people do this anyway)."

Everything you just said applies to software as well because you have to trust what your machine shows you. And it might be subverted.

"I don't think it is easy to achieve verifiability to be above nitpicking of the kind we're seeing here."

The most basic verifiability for open HW is to use components that have documented, open HDL, with the rest of what was produced open for third-party verification. Gaisler already did it for a CPU and a whole IP library. The Rocket RISC-V team did their part. The Magic-1 guy did it with wire-wrap on TTL chips. None of these people had $10+ million. Gaisler shows it was just a matter of getting money together to do the development cost-effectively, licensing to commercial users to sustain it, ensuring any I.P. could be re-synthesized for new use cases, and simply offering source for re-synthesis or just verification.

Now, we can go much further than that, and tools like the Qflow OSS synthesis flow are a step in that direction. However, the most likely approach, given EDA complexity, is doing it with industry-standard tools and having third parties check those results. You might even do it totally in high-end tools, but in a way that the end result can be verified to a large degree with open tools. There are many ways to do that, but all need work. The good news is that there are no Snowden leaks, etc., suggesting EDA tools are subverted. That helps. :)

So, you're creating one strawman after another to justify either (a) not trying or (b) calling something open that's not. The minimal requirement has been met repeatedly in other projects by other people. It takes no extra money on top of normal HW development. It can even be done both proprietary and open source, as Gaisler illustrated for some of his stuff. The result merely has to be open and done in a way that 3rd parties can verify. Optionally, in a way they can integrate and put on a fab of their choosing if they desire.

That's the minimum for open-source hardware. If a project does the minimum, I call it open-source hardware. If it doesn't, I call it something else. I call a project that heavily depends on black boxes from DOD-connected and scheming companies as opposite in operation to open as is possible. If the board is open, I'll call it an open-board or open-PCB design. Novena seems to be an open-board system with OSS software. It's not open HW.


"perfect is the enemy of good"

"And when someone does offer something like this to the community, what do they get? Negativity."

Irrelevant: nobody suggested perfection. As a matter of fact, all I suggested was for the project to deliver on its stated goal, or for articles on it to modify their statements to reflect reality. Many projects accurately portray their capabilities without any financial incentive at all. So, I'm sure this one can, too.

"Fully open hardware is never going to be quite as achievable as open source software because the manufacturing and distribution costs cannot be driven to zero in the same way. "

Fully open hardware has been achieved many times. It just hasn't been achieved at the desktop or laptop level. QED.

"Openness remains a minority interest, and therefore the NRE cost amortized in every unit is going to be higher. Signs of people being willing to pay for open hardware (the original designers who did the development, not the cloners who didn't) are scarce."

Now you have a real problem. You have to get the costs down or just drop the goal entirely.

"It's not even an unachievable amount of money that would be required to do a fully open CPU+GPU by paid professionals. $10m at the bottom end. Much smaller than, say, the Star Citizen kickstarter. What's unachievable is the community unity of purpose behind it."

It's actually easier than you think. All it would take is a number of academic teams, probably fewer than 10, using their discounted tools and grant money to each produce re-usable parts of a SOC on a good process node (e.g., 45nm CMOS). Think USB, DDR, cache, etc. They've already produced a processor (Rocket RISC-V @ 1.4GHz) and many DSP-style chips up to 160+ cores that can be licensed. The remaining pieces are small, plus SOC integration and testing with tooling.

I explained all this here:

https://news.ycombinator.com/item?id=10468534

Using academics for as many projects as possible keeps the EDA, MPW, and labor costs low. They can just public-domain everything they build or cheaply license it to commercial projects to fund further development. What's in it for them is job/tool experience. In just a few years, we'd have every major component we need. Then, a private company (or another academic team) could put it together as a SOC, leveraging tools they already bought, with funding from their own operations, Kickstarter, grant money, private donors, whatever. Many funding models.

So, far from an impossibility, there are multiple paths to get to the SOCs, DSPs, or FPGAs needed for open-source HW. A good chunk of it is already built, some open-sourced, and some just described in papers with enough detail to copy. There are teams all over the world with the skills, tools, and funding that can be motivated to do it. So, enough excuses: start cold calling them. :)

And things like Novena can present themselves accurately in the meantime as huge black boxes from scheming companies, connected together in an open-source way. Seeing it that way, though, might make people concerned with their security less likely to buy it. Re-purposed embedded boards are probably a lot safer. :)


huge black boxes from scheming companies

My argument is that you (as a customer buying 1x device) can't do your own verification of the device that you buy. For example, the Gaisler processor ( http://www.gaisler.com/index.php/products/processors/leon3 ) looks genuinely interesting - but it doesn't seem to be a commodity part that you can actually buy off, say, Digikey. If I wanted to sow doubt on its origins I'd point out that the parent company Cobham PLC is a defence contractor. We're back to "scheming companies".

At some point you, the end user, are going to get a small black epoxy chip package which you have been told contains a particular chip design. You can't modify or realistically inspect that object. While for software you can at least attempt your own build, Ken Thompson attacks notwithstanding.


"At some point you, the end user, are going to get a small black epoxy chip package which you have been told contains a particular chip design. You can't modify or realistically inspect that object. While for software you can at least attempt your own build, Ken Thompson attacks notwithstanding."

That's true. It's why you have to take a probabilistic approach with open HW whose behavior can be compared to spec in the field. Additionally, you can make sure I/O is easy to inspect so a subset of users monitoring for suspicious behavior can catch it. It's certainly going to be a black box, but it's easier to catch problems if it's open HW. Also, easier to avoid them if you retarget it with obfuscations and a mask/fab vendor of your choosing. That's a much more difficult problem for your enemies than total black boxes from one source and set of vendors who are all purely profit-motivated (read: subject to bribes or contract coercion).

Note: Gaisler's stuff predates Cobham IIRC, mainly funded by grant money for embedded and space applications, etc., plus licensing what he built. He also gives synthesizable specs and code to his customers with OSS software. The risk level of Gaisler doesn't begin to compare with Freescale, Xilinx, etc., even post-Cobham, because you can inspect the product's source and even strip what you don't trust.

Additionally, your easily inspected software runs on a small, black epoxy chip. You don't know what that software is doing any more than you know what the chip is doing, especially if either gets hacked. My point is that your arguments counter FOSS software as easily as FOSS hardware on the verification side, with the same results as my hardware security approach. Yet you still go for FOSS software because you think that level of assurance and verification is worthwhile, along with the future potential it provides. I'm making the same argument for HW, while citing other companies and projects that do in practice what I'm encouraging, to justify it as practical. The full benefits will take time to accumulate. That's fine as long as we're on that path. :)


You don't need to use the FPGA to use the Novena; it is a peripheral you could even remove (though that would be hard since it's a BGA).

The tool chain for the FPGA is proprietary and a black box. There are some reverse engineering efforts, but it's hard to do anything meaningful with them. By and large, open HW is in a fairly sorry state. Most users seem to think zero cost is good enough.


I know. I'm just assuming the FPGA is trusted in the TCB, esp. with DMA, plus implying many people will want to use it. Your assessment of open HW is accurate. As far as FPGAs, there's progress on several fronts. Here are some for you to check out that your people might even consider using, given the continual payoff of an FPGA w/out high unit costs.

Open-source FPGA architecture at 45nm http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-43...

Open-source bitstream generation for Xilinx w/out EULA violation or R.E. (!?) http://www.isi.edu/~nsteiner/publications/soni-2013-bitstrea...

Open tools for Lattice http://www.clifford.at/icestorm/

Open-source HW synthesis flow http://opencircuitdesign.com/qflow/

Note: Cliff is on a winning streak, eh?

I recently did a write-up showing what it could or would take at both ASIC and production levels here:

https://news.ycombinator.com/item?id=10468534

https://news.ycombinator.com/item?id=10468624

So, the paths are clear and there's plenty of potential. Just no uptake. It will take government, corporate, or private sponsors working with academic (low-NRE) and professional (experienced) HW designers that open-source stuff as they go. Proprietary, dual-licensed OSS is the only way to go for HW that I know of.


The FPGA on the Novena does not have access to main memory via DMA or anything else besides the connector. It sits in its own EIM memory space as a slave. (It has its own local external memory, though.)

Thanks for the links. I knew about the efforts to generate for Lattice but haven't kept up. The big issue is really good P&R with a working optimizer for anything bigger than toy designs and chips. It seems Cliff has made some real progress, yes.

And it is interesting that it is basically easier to go open with ASIC than FPGA. The problem is getting your chip manufactured.

What would you say would be the best path for Cryptech to try and go open with the flow? Alternatively, to mitigate black boxes in the FPGA of commercial chips?

We currently fill the Spartan-6 LX45 on the Novena to the max. We target the biggest Artix-7 you can design for with the zero-cost versions of the Xilinx tools for our Alpha board.

I've toyed with the idea of mapping a synthetic FPGA into a commercial FPGA and then implementing the real use-case core in the synthetic FPGA. This would allow greater control over how the design is implemented. But then you need a good optimizer. And the cost to implement a gate will require even more transistors than the FPGA already does. Mapping the synthetic FPGA to the underlying tech can be done pretty efficiently, but configuration support is costly.
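
To make the overhead concrete: the basic cell of such a synthetic fabric would be something like a 4-input LUT, i.e. a 16-bit truth table behind a 16:1 mux, so every emulated gate burns a whole configurable cell of the host FPGA plus its configuration storage. A toy C model of one such cell (names made up, purely illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* One cell of a synthetic FPGA: a 4-input LUT is a 16-bit
       truth table (the configuration) read out through a 16:1 mux. */
    static unsigned lut4(uint16_t cfg, unsigned a, unsigned b,
                         unsigned c, unsigned d)
    {
        unsigned idx = (d << 3) | (c << 2) | (b << 1) | (a << 0);
        return (cfg >> idx) & 1u;
    }

    int main(void)
    {
        uint16_t and4 = 0x8000; /* output 1 only at index 15: a 4-input AND */
        printf("%u %u\n", lut4(and4, 1, 1, 1, 1), lut4(and4, 1, 0, 1, 1));
        return 0;
    }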


"The big issue is really good P&R with working optimizer for anything bigger than toy designs and chips. It seems Cliff has done some real progress yes."

Yep. The only one I know about is VPR:

http://www.eecg.toronto.edu/~vaughn/vpr/vpr.html

Another angle on the topic listing specific tools and issues:

http://people.kth.se/~frobino/foss/presentation.pdf

You might want to test whether Qflow/Yosys and/or VPR can handle your designs. We need to know where they're at in terms of usability and performance to identify what improvements remain to be done. Also, crypto algorithms are usually small enough to fit on the Lattice FPGAs. So, try Cliff's IceStorm synthesis flow for them. It would be great if that worked, given their uptake in embedded and how many algorithms could be put on them.

"What would you say would be the best path for Cryptech to try and go open with the flow? Alternatively, to mitigate black boxes in the FPGA of commercial chips?"

"I've toyed with the idea of mapping a synthetic FPGA into a commercial FPGA and then implement the real use case core in the synthetic FPGA."

That was the recommendation I wrote last night in another comment, haha. To be honest, given poor demand, I'd actually recommend you do nothing for open hardware. :( If you're ideological or brave, there are basically three paths you can go down: cheap ASIC w/ MPWs over time; S-ASIC; FPGA clone. Let's look at them in reverse.

Cloning Xilinx/Altera FPGAs was one of my first ideas, too. The concept was that their HLS or place-and-route tools get the design going in an efficient way. Then, OSS tools suck the bitstream out and re-apply it to my comparable device. A combination of Qflow w/ VPR could be used as an open alternative targeting my device directly.

The biggest risk here, other than NRE costs and tooling risk, is lawsuits from Xilinx or Altera, esp. over patents. They could make an ASIC look cheap. Still, it's maybe cheaper per unit if a royalty were negotiated, given the outrageous unit prices of commercial FPGAs. And you have a box you can trust and use for other products, even license to third parties for onboard FPGA logic in their SoCs. You could go quite a long time without a lawsuit while keeping the FPGA part an NDA secret.

Also, it's worth considering the security advantage of controlling how I.P. gets onto the FPGA to prevent external attacks, or using an anti-fuse variant to maintain integrity. Software attacks are the more practical concern. I'm not convinced the Big Two do enough on that. So, having trusted loading or secure interfaces to the FPGA could be justification enough to design one, given your market.

Alternatively, fund more academic development of something like Archipelago w/ VPR, etc., to target it. That keeps anyone from claiming you're using their stuff, outside of the occasional patent suits which happen anyway. You can also keep the complexity and cost at levels you like rather than what the Big Two think is marketable.

The next option, Structured ASICs, is quite under-utilized. They're basically FPGAs where the routing is done with a fixed layer in ASIC that the vendor generates during conversion. That it's only one layer makes it way cheaper than an ASIC of the same size. That the wires only connect to blocks you use means it goes faster and uses less power. That it's an ASIC, not an FPGA, means lower unit prices. Vendors that do this include eASIC's Nextreme and Triad Semi's VCA (their analog blocks can be TRNGs, btw). I know eASIC does maskless prototyping for low cost, plus has lots of 3rd-party I.P. on theirs already. All I'm saying for S-ASIC is to get some estimates for specific NRE and volume, as the jury is still out on whether they're a good idea. Also, make sure your FPGA HW is designed for easy conversion to ASIC up front, as it's not as easy as they promise. ;)

The last option is a Standard Cell ASIC. The cheap way of doing this is to figure out the absolute least amount of features (esp. I/O interfaces) you need to get the job done on the oldest process node that can handle them. Then, just gradually move I.P. onto it piece by piece with multi-project wafer runs to keep costs in check. The 180nm and 350nm nodes are popular right now because masks are cheap, due to fully-depreciated equipment and most bugs having been worked out. I believe the fabs will even hold your mask for you so you can keep making more chips from it over time. Curious what that costs. Anyway, Tekmos has MPW offerings from 350nm to 65nm, with a 2.5D option that combines 180nm/350nm logic with 90nm flash memory. Never used them, so unsure of cost/reliability.

In any case, your options all cost significant money if you want to hold a chip in your hand that you built. The costs, whether Standard Cell or S-ASIC, are lower than they've ever been. Homebrew FPGA on a node like 45nm (90nm minimum) with academics and their cheap tools doing the development seems ideal as it will keep paying off. Can always use a 3rd party tool for synthesis if OSS doesn't cut it. Otherwise, can put your crypto on ASIC piecemeal or see if eASIC, etc can do it cheap. And safest of all: don't do shit about the problem and let your customers worry about it until they pony up the cash to let you solve it for them.

Wish I had specific recommendations that cost you a new car instead of a house or two. But this is HW development after all. Hope some of this helps in your own explorations on FPGA's or developing your offerings. :)


I haven't looked at the actual board design, but I'm making a few educated guesses based on people mentioning Freescale.

If this is using an i.MX6, one great way to interface the FPGA with the CPU is through the EIM (external interface memory). It gives you a quite fast interface without giving the FPGA direct access to RAM. It might not be fast enough to connect a 100 MSPS ADC, but for many things it's quite great, and it's easy to write both the Verilog/VHDL and the kernel-side driver for.
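
On the software side, the FPGA's registers then just show up as ordinary memory. A minimal userspace sketch of poking them through /dev/mem (this assumes the usual i.MX6 EIM CS0 base of 0x08000000, a 16-bit bus, and that the kernel/device tree has already configured the EIM pins and timing; verify all of that against the reference manual before trusting it):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define EIM_CS0_BASE 0x08000000UL  /* assumed i.MX6 EIM CS0 base */
    #define MAP_LEN      4096UL

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);  /* needs root */
        if (fd < 0) { perror("open"); return 1; }

        volatile uint16_t *fpga = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, EIM_CS0_BASE);
        if (fpga == MAP_FAILED) { perror("mmap"); return 1; }

        fpga[0] = 0x1234;                    /* write a register in the FPGA */
        printf("reg0 = 0x%04x\n", fpga[0]);  /* read it back over the bus */

        munmap((void *)fpga, MAP_LEN);
        close(fd);
        return 0;
    }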


The FPGA on the Novena is connected to the EIM. The EIM is a bit of a pain to work with. Pavel (part of the Cryptech core team) has done a good top level with much cleaner constraints compared to what Bunnie provides.

If anybody wants to build FPGA designs for the Novena I would recommend to start with our top level design and build setup, just clone and go.

We get good performance out of the FPGA as a configurable coprocessor.


Thanks for the tip!


Curious, what's your level of HW development experience with FPGAs, JoachimS? I'm keeping my eye out for people who might put the high-level synthesis tools I spot through their paces. There are two that I think haven't gotten much review but have potential.


Fairly big. I am part of the Cryptech core team and have designed most of the crypto cores and the TRNG. I have worked on several FPGA designs including VoIP gateways, transcoders, radar systems, networking, etc., with a focus on use-case architecture more than low-level interface issues. Done some ASICs too.


Sweet. That means you have the experience for this. I'm ignoring the ones that seem like OSS/academic toys, or commercial ones focused on too narrow a model. The two candidates I stumbled upon recently are Synflow's Cx language and Synthagate HLS.

Synflow [1] is interesting because it's a step up from Verilog/VHDL w/ open-source tooling. Their strategy sounds smarter and easier than many, in that it seems to try to augment the human brain rather than replace it. That it outputs synthesizable Verilog/VHDL is nice. The downside is that one of the founders wrote an article [2] declaring HLS dead, and they transitioned to an internet-of-things company instead. The upside is they'll probably keep supporting this, anything OSS can be picked up by another team, and he was on HN very happy to answer any questions. I just lack the hands-on experience to properly put Cx through its paces and ask him the right questions, you know.

What got me interested, other than Cx's design, was that they were offering I.P. blocks for around $500. I never see I.P. for three digits. So, I'm hoping it's because they're onto something good. :)

The other one to check out is Synthagate. Synthagate's web site [3] details their methodology using Abstract State Machines, with two different types for control and data. That's interesting, and it's probably expensive. More interesting was Baranov's book [4], which details his exact methodology for doing HLS. Correct me if I'm wrong, but full details of an HLS process good enough to synthesize microcontrollers rarely leave the trade-secret stage. The real value here, other than a possibly good HLS tool, is a specific method of doing HLS described in enough detail to clone with OSS tools.

I just need someone with HW design expertise, esp. FPGAs, to evaluate what's in the book and tell me if it could be used for algorithms, protocol engines, etc. Does Baranov's method look like it can work for real-world use cases, and does his tool do the job for a set of I.P. representative of what people will really use? The former you can just read, but the latter might be available with a free trial or something.

I'm thinking it is his secret sauce because he (a) sells it and (b) seems to have pulled it off his web site. Goofball forgot about Wayback Machine. Enjoy. :)

[1] http://cx-lang.org/

[2] http://www10.edacafe.com/blogs/disruptedhard/2015/04/13/numb...

[3] http://synthezza.com/what-makes-synthagate-different1/

[4] https://web.archive.org/web/20150314182353/http://synthezza....


What do you want to build with these high level synthesis tools?


That's like asking me what I want to build with this C programming language and compiler thing. You really don't have time to read it all. :)

The main objective is to find a tool flexible enough to handle nearly arbitrary algorithms and dramatically lower the bar for people/companies wanting to do FPGA acceleration. The less hardware design they need to know, the better. That takes HLS. Worst case, the design happens faster and cheaper, using fewer hardware developers, possibly with less experience.


Sounds interesting btw!


Freescale's fabs in the U.S. are all very old (like early-1990s old). AFAIK they only manufacture analog and MEMS-type stuff there. I think TSMC manufactures their MCUs, MPUs, and most other digital electronics. This is the 'fab-lite' model.


Thanks for the info. I'll try to double-check mine on that. :)


It kills me that it has no working GPU. :(

I found the wording about Xfce weird: they claimed not to use Ubuntu because they needed Xfce, but ...

http://xubuntu.org/


The GPU is being worked on. The Mesa driver is being worked on as we speak by a number of people including rmk and jnettlet, with austriancoder's experimental repo located at https://github.com/austriancoder/mesa-1. If you want to get technical, we already have fully open-source 2D acceleration, and we routinely use the 2D GPU to e.g. do colorspace conversion and 2D scaling on video.

I went with Debian because we knew some Debian people. At one point I considered moving to Ubuntu, because they package "Firefox" and "Thunderbird" (instead of rebranding them) and they have Chromium as well. There were lots of teething problems, though, ranging from their Xorg modesetting driver refusing to use generic KMS on a non-PCI system, to strange oddities when no initrd was present, to the desktop straight-up crashing when logging in (turns out Unity assumes GL is present and has no fallback.) Ubuntu seems to make many assumptions about running on standard x86 hardware, and was rough around the edges. We could have polished it, but it seemed easier to stick with Debian.


Thanks for the explanation. Debian makes sense in that light.


Xubuntu's Xfce is weird, and heavily changed from upstream, last time I tried using it. Broken themes, incompatible panel applets, etc.


It currently says "We couldn’t use the popular Ubuntu and RedHat distributions because they basically require GPU hardware acceleration". Ubuntu uses Unity (graphical shell for GNOME desktop environment). Unity is very graphics intensive and has a number of other drawbacks. Xubuntu is derived from Ubuntu but uses XFCE desktop environment and has a companion suite of lightweight applications.


A GPU is actually one of the things you could use the FPGA for fairly easily. The question is whether you could gain any serious performance benefits, though.

The CPU cores run at 1.2 GHz; for reliable transfer you want to run the EIM interface at 66 MHz. You could potentially run the GPU at something like 100 MHz in the Spartan-6.

That is a 12x clock difference compared to the CPU (1.2 GHz / 100 MHz), so your GPU must provide more than a 12x per-cycle gain over SW to be useful. That's not taking parallel processing into account, though.


I don't think that's a good idea. Doing it on the FPGA will suck so badly that people will immediately remember why we use custom designs for GPUs. I'm sure most people want to be able to do graphics acceleration and specialized tasks on the FPGA at the same time. They need the GPU for those use cases.

I could see a compromise where they used a CPU with an onboard GPU or multimedia acceleration. Not sure if that would meet their cost or NDA-free-datasheet requirements, though.


> This open-source requirement of ours ended up influencing the selection of almost every piece of hardware, including the main CPU, the battery controller, and the Wi-Fi module. For example, we couldn’t use Intel’s x86 microprocessors because they can accept firmware updates that we cannot debug or inspect. Instead we chose an ARM-based Freescale i.MX6 system-on-a-chip, which has no such updatable code embedded. (A system-on-a-chip, or SoC, is similar to a microprocessor except it has more of the supporting hardware, such as memory and peripheral interfaces, needed to make a complete computer.) The i.MX6 does have some code burned into it to coordinate the computer’s boot-up process, but this firmware can’t be changed, and its unencrypted binary code can be read out and analyzed for possible security problems.

I'm confused. They claim the open source requirement prevented them from going with an Intel chip, but then they say the reason was that Intel could push firmware updates. Furthermore, the Freescale SoC has firmware built in, and it's not open source.

Is this what we consider open source these days? Sure, a good whitepaper is nice, but it's not open source. Don't get me wrong, this is still a nice effort on the open front, but calling this a "laptop with no secrets" seems like a stretch.


The theory is that the boot firmware is executed with the MMU off, and then jumps to your code and is done. Unless you specifically call into ROM, it just sits there unused.

Now it could be that there is some hardcoded exception handler in the memory controller that jumps to a specific area of ROM when a certain instruction is hit, and that causes exfiltration of data. But to do that would require a lot of separate components interacting, including ones that Freescale doesn't have access to (i.e. the A9 RTL, and possibly the PL-310 and the memory controller) and would require Freescale put the data exfiltration routine in ROM, or maybe there's an exfiltration routine hardcoded into the PL-310, in which case I wonder how they actually get data out of the machine.

As you say, that firmware can't be changed. It can be read out and analyzed for possible security problems, and I hope someone does that. The code is mostly concerned with validating signed boot images for "secure boot" where manufacturers don't want third-party firmwares to be used. It checks the firmware signature / decrypts the firmware using a key burned into OTP fuse bits. Novena doesn't use fuse bits, and in fact doesn't even blow the "boot source" fuses, meaning you're free to change the boot source from internal SD to external SD to SSD to booting from USB.

You can actually blow the fuses yourself and load a key known only to you, which means only you can sign firmware that it'll boot.
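
For anyone who wants to try the read-out-and-analyze step mentioned above, a rough sketch (this assumes the i.MX6 boot ROM sits at physical address 0 and is 96 KB; check the reference manual for your exact part, and note that kernels with strict /dev/mem protection will refuse the mapping):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define ROM_BASE 0x00000000UL  /* assumed i.MX6 boot ROM base */
    #define ROM_SIZE 0x18000UL     /* assumed 96 KB */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDONLY | O_SYNC);  /* needs root */
        if (fd < 0) { perror("open"); return 1; }

        const unsigned char *rom = mmap(NULL, ROM_SIZE, PROT_READ,
                                        MAP_SHARED, fd, ROM_BASE);
        if (rom == MAP_FAILED) { perror("mmap"); return 1; }

        fwrite(rom, 1, ROM_SIZE, stdout);  /* pipe into a disassembler */
        return 0;
    }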


xobs, what was your experience working with the CPU vendors to open up the ROM, or at least provide a method to bypass the ROM and use an external one?


"Bypass the ROM" would be a custom chip, albeit a small variation. So that's not going to happen. However the boot ROM isn't actually a secret, it's right there in the memory map for the processor and as Bunnie says you can just read it out.

Given that it's a ROM and not reprogrammable, and not used after it transfers control to the user bootloader, I think it's fair to treat it more as part of the silicon than as a piece of software. It might be worth auditing for bugs in the boot assurance crypto though.

(If the hardware is malicious, it would be far simpler and less detectable to do it as a silent peripheral, possibly even a whole other processor, than in the boot ROM)


This is purely speculation, but I wouldn't be at all surprised if there was a pin-strapping method that disabled the boot ROM and caused it to be read off the EIM interface, which is probably how the ROM was debugged in the first place.


People are afraid of the Intel Management Engine, and you can read about it here: http://libreboot.org/faq/#intel

It is a dense and very detailed text, but basically your Intel CPU contains Active Management Technology (AMT) which lets remote users control your computer, which may or may not be what you want, and there might be backdoors hiding here.

It also includes Intel Boot Guard, which prevents users from installing their own firmware (such as libreboot and coreboot) because it needs to be signed with a key from Intel.

The page sums it up like this:

> In summary, the Intel Management Engine and its applications are a backdoor with total access to and control over the rest of the PC. The ME is a threat to freedom, security, and privacy, and the libreboot project strongly recommends avoiding it entirely. Since recent versions of it can't be removed, this means avoiding all recent generations of Intel hardware.


Also see Joanna Rutkowska's recent paper: https://news.ycombinator.com/item?id=10458318 (fun reading).


I've asked a couple of times here, but I haven't had an answer:

I've got an X1 carbon where the vPro/AMT/ME setup screen doesn't load (CTRL+P on boot does nothing, missing UEFI binary?). Can I still activate it in Linux for my own purposes?


Intel firmware is obfuscated, closed-source and very low-level which means that it could be running nearly anything -- you don't know what it is.

The SoC has closed-source blobs, but it's all debuggable and reversible, which is not as good as OSS, but not as bad as the opaque blobs from Intel.


I really can't wait for the first fully open down-to-the-metal system. A few ARM cores, a few Vulkan-ish GPU cores, an optional FPGA. Just nice compute power that's easy to tap into for OSS devs.

ps: I shouldn't have written ARM, I meant simple ARM-like ISA (from what I understand it's way smaller than x86).


Supposedly the CHIP computer project has committed Allwinner to open source their SoC. I don't know exactly what that entails since it's an ARM core chip, but I'm tentatively holding my breath for that. I will be extremely surprised if they deliver on that commitment, but if they do it could be really great for the open hardware community.


Allwinner uses closed-source GPUs, Mali and PowerVR; they have been quite good about documenting their own stuff, though.


Is Allwinner actually providing code, or is Next Thing using the linux-sunxi contributions to mainline?


That I have no idea on. I couldn't find many details either. I assume it's because CHIP doesn't want this to be a selling point on the project because most likely it won't happen given Allwinner's past total disregard for being GPL compliant.


Oh, I didn't understand that they got AW to open-source the SoC.


What do you mean by "open source the SoC"?


You can always go for the Samsung S3C SoC, which loads 4 kB of code from NAND flash. Now, if you ask me whether the loading is done by software in some boot ROM internal to the chip or by hardware logic, I don't know.

You can also try Texas Instruments chips, but I think their initial bootup starts from a ROM.


What's the probability that ARM will open-source their CPU core designs?


ARM isn't open sourcing anything anytime soon. Take a look at OpenRISC and RISC-V. They're aiming at a fully open source SoC implementation in silicon.

http://openrisc.io/

http://riscv.org/


Alternatively there's the Open Processor Foundation, who are taking advantage of the fact that Hitachi's SuperH patents are lapsing over the next few years (some already have): http://0pf.org/about-ocf.html

The official website above has a lot of info, but there is a nicer intro @ lwn: https://lwn.net/Articles/647636/


But IIRC (some blog benchmark) they are very very slow.


Also, the benchmarks we did a while ago on a taped-out chip with our in-order RV64 core, Rocket, showed that it compared quite favorably to an ARM Cortex A5.

http://riscv.org/download.html#tab_rocket_core

Unfortunately, it would be quite difficult for outside organizations to replicate these measurements unless they can pay TSMC for a fab run.


(Disclaimer: I am a PhD student in the Berkeley computer architecture group, which designs RISC-V)

An ISA can't be "fast" or "slow". It's just a specification. There's no reason you can't build a RISC-V core that's just as fast as an ARM or x86 core. The only reason we haven't done so is because we don't have access to the modern fabrication technologies that Intel and commercial ARM licensees use.


Instruction sets very much can be fast or slow, at least in the context of discussing specific use cases. Many of the inner loops that take up most of the active cycles on a CPU today (crypto, compression, imaging, signal processing, linear algebra) have very specific code patterns that can be targeted with specialized instructions that provide integer-multiple reductions in instruction count.

Let's assume that application-targeted instructions can reduce the size of the inner loops in these applications by 2x. Even the RISCiest cores do not, in practice, run at 2x the clock speed or 2x the issue rate of cores with application-targeted instructions. Thus, ISAs with baked in support for these use-case accelerating instructions will be more performant.

The RISCy core probably wins on mm^2, but a perf/mm^2 analysis will be highly dependent on how well designed the application-specific instructions are for area conservation.
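
A toy example of the kind of reduction meant here: population count is a single instruction on ISAs that bake it in, and a multi-instruction loop otherwise (illustrative C; the builtin maps to a POPCNT/CNT-style instruction where the target has one):

    #include <stdint.h>
    #include <stdio.h>

    /* Generic software popcount: several instructions per set bit */
    unsigned popcount_sw(uint32_t x)
    {
        unsigned n = 0;
        while (x) {
            x &= x - 1;  /* clear the lowest set bit */
            n++;
        }
        return n;
    }

    /* With an application-targeted instruction, the compiler can
       collapse this to roughly one instruction */
    unsigned popcount_hw(uint32_t x)
    {
        return (unsigned)__builtin_popcount(x);
    }

    int main(void)
    {
        printf("%u %u\n", popcount_sw(0xF0F0F0F0u), popcount_hw(0xF0F0F0F0u));
        return 0;
    }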


Okay, fair enough. What you describe are essentially extended instructions, and RISC-V has support for it. One of our teams is currently working on a vector co-processor that can get very good performance on compute-heavy workloads.


I really wonder what the point of OpenRISC/RISC-V is; isn't there already OpenSPARC? I know that they removed quite a lot of things (too much, IMHO) in the RISC-V ISA to reduce the number of gates needed, but this has a price: there is probably much more support for SPARC in compilers/applications than for RISC-V.


I know. I was trying to be subtly sarcastic (and obviously failed).


We already have open Sparc. Apparently what we need is a fully open SoC (all the other stuff) along with a CPU that's tested as a full ASIC (as opposed to FPGA) design. Some comments in this sub-thread:

https://news.ycombinator.com/item?id=10458577

Personally I'd love for some foundation to just buy up Motorola and open up everything, but I'm guessing a) they depend on IP they can't open up, and b) I guess Google kept all that, and the Motorola that Lenovo now owns(?) is just a brand?


Moto's chipmaking division was spun off as Freescale Semiconductor. NXP (née Philips) acquired Freescale earlier this year.

Both FSL and NXP are large ARM licensees so yeah, ain't gonna happen.


Small, since that's how they make their money (design licenses)

But there's a completely open ARMv2 clone: http://opencores.org/project,amber


Which is probably the third open source attempt. ARM usually stomps hard on these projects. Anybody remember blackARM, or nnARM?

Clones have it even worse. TurboSilicon was attacked so hard they were thankful to hand all their assets to ARM.


"Instead we chose an ARM-based Freescale i.MX6 system-on-a-chip, which has no such updatable code"

Genuinely curious ... in a previous HN discussion:

https://news.ycombinator.com/item?id=10458318

ARM chips with special "9th cores" were discussed:

https://news.ycombinator.com/item?id=10459158

Do ARM-based Freescale chips, like the i.MX6, have "features" like this?


How in the world does a SMART-compatible Samsung SSD not have secrets?

Did they never read any of Snowden's leaks? :(

https://www.google.ca/search?q=nsa+hard+drive+firmware


It seems that you don't know that they are aware of the topic. In fact, they proved the possibility of running arbitrary code on some SD cards' microcontrollers:

http://www.bunniestudios.com/blog/?p=3554


Well, I read through the entire article and despite the detailed attention lavished on CPUs and GPUs, there was not a single mention of their disk's security.

It's disappointing that if they are "aware" of the problem, they didn't mention it to their audience in an article about their device's security, don't you think?


We need a cheap general purpose Novena-like laptop.

I'd argue a really good place to start from is a custom Rockchip machine (like Asus C201, which is now supported by Libreboot). Add a free GPU (or finish the Lima driver) and we are ready to go.


I like the project, but I'm not a fan of the designs they're going with for the first run - I'd prefer a model more similar to the one they first teased a few years ago, if only in terms of form factor [1]

[1]: http://www.geeky-gadgets.com/wp-content/uploads/2014/01/Open...


There were a few issues with that design. One was that we never really got hinges working. I bought some friction hinges, but couldn't settle on a way to attach them. The temporary bodge was to 3D-print some chocks with an angle on them.

The other was that it was difficult to get past airport security. The German agents seemed to at least appreciate that we'd built a laptop that we can trust.

Finally, it was nigh impossible to access things like the FPGA and the serial ports. I find myself using GPIOs all the time to do things like bitbang SWD, and being able to do that by reaching around the screen is handy, not to mention being able to mount the thing being debugged under SWD directly in the case, so it's much easier to take with me.
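
For the curious, "bitbang SWD" just means wiggling two GPIOs as clock and data. A bare-bones sketch using the sysfs GPIO interface, with made-up pin numbers that you'd have to map to the Novena header yourself (it assumes the pins are already exported and set as outputs):

    #include <stdio.h>

    /* Write 0/1 to an exported sysfs GPIO */
    static void gpio_write(int pin, int val)
    {
        char path[64];
        FILE *f;
        snprintf(path, sizeof path, "/sys/class/gpio/gpio%d/value", pin);
        f = fopen(path, "w");
        if (f) { fputc(val ? '1' : '0', f); fclose(f); }
    }

    /* One SWD bit: present data on SWDIO, then pulse SWCLK */
    static void swd_write_bit(int swclk, int swdio, int bit)
    {
        gpio_write(swdio, bit);
        gpio_write(swclk, 1);
        gpio_write(swclk, 0);
    }

    int main(void)
    {
        /* SWD line reset: 50+ clocks with SWDIO held high */
        for (int i = 0; i < 56; i++)
            swd_write_bit(4, 5, 1);  /* GPIO 4 = SWCLK, 5 = SWDIO: made up */
        return 0;
    }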


I'd guess the first model case would be more expensive to manufacture, and the current model they've chosen is more 'modular' and adaptable to ad hoc engineering situations?


I'm hoping this is the start of a trend.

The true start will be once the lowRISC SoC is out and open hardware devices using it start to flood the market.


If the heirloom version hadn't sold out so quickly, I might have bought one. Fortunately for my wallet, they were gone fast!

Might be worth laser-cutting a case out of nice wood for one though. Hmm.


I'm looking for a laptop that is easy to repair and has open source hardware.

Preferably there would be something like the ITX standard for laptops, with different standard designs: ultrabook, notebook, and different screen sizes. If there were a standardized screen, you could easily repair your laptop.

All hardware should be open. The firmware should be open with known checksums. Open boot loader. Open source operating system.

All components should be upgradable, so that once the laptop is made obsolete speed-wise by Moore's law, it is easy to disassemble and recycle the materials. Further, you should be able to reuse the components which do not age as fast as the CPU and mainboard, for example the keyboard, mouse, and display.

We throw away a lot of electronic garbage that ends up being exported to cheap scrap yards in China and Africa; this needs to change if we do not want a junkyard planet.


Fairphone is doing something similar with their upcoming phone, although the hardware is not open. They should talk with the OP.

https://www.fairphone.com/phone/


Thanks for sharing. I just looked at their site, but I can't see when the Fairphone 2 will be coming or what other OSes it will be able to run in addition to Android 5.1. Any idea?


What %age extra are you prepared to pay for this?


Why didn't they use an open source CPU? E.g., OpenSPARC.


Aside from the fact that we're familiar with Freescale and ARM, many other chips we "could have used" are unobtanium. While it's true that there is a source-level implementation, I can't find any T1 or T2 parts available for purchase. It's the same reason why we went with an A9 instead of an A15 or a 64-bit chip: You just can't buy them unless you're a big company. And if a small two-person company can't buy them, how can we claim it's open source hardware if you can't buy them either?

The other possibility would be to fab a chip ourselves, but that's a whole other order of magnitude in terms of cost and complexity, and the result isn't that great in terms of speed and available peripherals. Plus, when you fab a chip like this a lot of the hardware blocks are IP provided by the chip foundry, e.g. flash controllers and DRAM cells, and those are always closed-source. It just moves the whole thing one turtle down.


Because you can't buy (1) an OpenSPARC?

1: For a reasonable amount of money, that is; paying for a custom ASIC is quite expensive.


Maybe they wanted good speed, freedom from particularly complicated FPGAs, and not to have to order chips as ASICs? I'd like to know too; this could be a good thing to have.


Interesting, never heard of this before. I have backed a pi-top on Indiegogo with the plan of switching out the Pi for a BeagleBone to get closer to all-open components.

Very cool project :)


I have one of these machines and it's been great fun so far to hack on. The development process and the continuing work of the community has been great to be a part of.


Nice effort, but software rendering is a no go in 2015.


1. Open does not mean ugly.

2. It's too expensive!

3. Is it a manually-capsuled desktop-style laptop?

4. Is it light enough to carry?

5. Where can I get one?


The expression "GNU/Linux is a special version of Linux" sounds a big strange to me. GNU/Linux is an actual OS, not a version of the Linux kernel. That's really the wrong way to explain things.


Maybe this was meant to distinguish GNU/Linux from other Linux userlands, such as Android?

Apart from that, for the layman "Linux" is generally understood as the whole Linux distribution, not just the kernel or the OS.

Still, even from that perspective this wording is quite confusing.


Yes. It would have been nice to say something like, "We decided not to use Android. -- insert reasons for not using Android here --. Android uses a kernel called Linux that interfaces with the hardware. Instead we went with GNU on top of a Linux kernel". It makes it clear what's actually happening. If you've seen an Android machine and a GNU/Linux machine, then you can easily understand what Android and GNU provide on top of Linux.

Still, I was happy to see the comparison even if it was slightly misleading.


It's sad that it's been 25+ years in the making to finally get something sort of okay, and even then it's ridiculously expensive compared to closed source. Why can't we have something ThinkPad-like for the cost? Don't they know they'd make metric craptons of money on something like that?


> Don't they know they'd make metric craptons of money on something like that?

They would? ThinkPad margins aren't that great to begin with (which is why IBM sold the business off to Lenovo). So now take out every high-volume, low-cost component in there and replace it with something less popular. Because you no longer have those economies of scale, you don't have the best fabs, so the thermal profile of everything in there sucks, and you have to find some way to put the thing in a lap without causing permanent damage. And then you're marketing the thing to the users of the third-most-popular desktop OS, where the breakdown of market share is something like 90/8/2. But then not to users of the most popular desktop Linux distro, or even the top 10 most popular Linux distros, but to the people who use stuff like gNewSense.

Where's the metric crapton of money?


You might be looking for the Libreboot X200, an FSF-approved refurbished ThinkPad X200.

http://minifree.org/product/libreboot-x200/


all out of stock?


Yes.

The author of libreboot and owner of Minifree (formerly Gluglug) said that he wanted to complete all current orders first, so he marked them as out of stock.

You can buy a X200 and free your laptop yourself if you dare :) You can swing by #libreboot on Freenode if you have any interest in the project.


Of course, I dislike how libreboot doesn't do microcode updates anyway. ARM processors have no microcode at all.


I think you should be able to just grab an equivalent X200 off of eBay (possibly with a different WLAN card added) and be safe (as in safe that you can run libreboot, not any guarantees about firmware in all the dark corners, like the Ethernet card).

I've been considering trying libreboot on my T420s - but as I use it almost every day, it's a bit high risk in case I brick it.


> ETA for restock: 10 November 2015. No preorders, sorry!

First line of the description


In the original blog post for the project, they explicitly anticipated this complaint and stated that this simply wasn't their goal. Novena is not intended to be a mass-market ThinkPad clone; it is a laptop made by OSHW hackers for OSHW hackers. Hence the FPGA and plenty of GPIO pins.


I wouldn't say it's a laptop; there is no integrated keyboard and the screen faces out when closed, so it is unsuitable for transport.

It's a small portable computer that is probably underpowered for everyday general-purpose use.


Possibly slightly off-topic, and not something I would normally comment on, but wow - what a lot of adverts! On a desktop browser, aside from the full-screen ad you need to click through on load, about 3/5 of the screen is adverts.

I've been torturing myself over the idea of having a single banner ad on my site (decided not to for now), but this is something that makes me want to finally give up and install an adblocker.


It looks terrible. These are the kinds of machines that give a bad name to the concept of free/open source hardware. Nothing that is the fault of Bunnie, but something that reflects a sad reality.


Glass-half-full guy here. Free software, as a movement, started in the early '80s. Writing free software is incredibly easy because you need only a bare minimum of building blocks. Also, once the initial tool set is built, anyone can join with virtually no cost of entry (for example, if you had emacs and GCC binaries and a computer capable of running them, you could participate in free software).

Free hardware is completely different. You have to rely on other organizations all the way down the chain. And even if you manage to build a tool set to build something, it's not like anyone else can simply copy it and get started. They end up having almost the same struggle that you did.

Consider as well how long it took for the free software viewpoint to become acceptable to mainstream thinking. Even today, almost in 2016, there are huge populations of programmers who think free software is crazy. But now there are companies built on free software with near-billion-dollar valuations, and other companies mostly built with free software that are in the many billions of dollars of valuation. The concept has gotten over the hump, though it was a huge struggle for decades (and it continues to be necessary to make sure that legislation doesn't take our freedom away).

Software was crazily hard. Hardware is nigh on impossible. And yet, they have accomplished this! It's not sad, it's amazing!

I understand that you wish that the world was made up in a way that would make free hardware easy, but it's not. It never will be. It will take the concerted effort of dedicated pioneers to drag it into the consciousness of the masses. I suppose it is sad that it isn't easier in the same way that it is sad that it doesn't occasionally rain candy instead of raindrops. But that is no reason to be disappointed.


This is a machine for early adopters - not the followers.


The sad part is that we are talking about "early" adoption with just two months to go until 2016!


It's about priorities. This [small budget] project prioritizes open source hw/sw, not industrial design. Good industrial design isn't cheap or easy, nor does it come for free.


I think it's more about people.

I'd love to work on something like this. I might not even need money for it. But I would not do it just to "sleek up this box"; I would want to take part in the design process. (Or some money. I can do boring stuff for a salary.)

1. These people probably don't know anybody like me.

2. Nobody wants to give away decision power when it's about their pet project.


I'm sure people said the Apple I looked ugly too. That didn't stop it from revolutionizing home computing.



