Is this pretty expensive though? "Just the board" costs more than a fully-equipped laptop. I understand this is not for the mass market given the pricing and other things, but still...
Whether it's expensive or not depends on your point of view:
"Interestingly enough, someone in the audience also asked about the price point. Cross replied that the comments break down into two distinct camps: FOSS people say 'wow, that's an expensive laptop,' but FPGA users say 'wow, that's so cheap for a development kit!'"
It's got a mass of relatively expensive hardware components that have (almost) never been included in a laptop form factor, and certainly not together. Novena can replace quite a few expensive discrete tools and do it in something that can be comfortably carried around and modded. What you're asking is basically why a full shop with a CNC mill installed in a large truck is so much more expensive than a nice pickup when the latter can go much faster and is a lot more comfortable. They share a category and vaguely similar basic shapes, but they're not really the same thing and you don't use them as if they are. Is that a helpful analogy?
Field replacement for oscilloscopes & some logic analyzer uses. Software Defined Radio host (the expansion card is in development). Hardware reverse engineering, including bus-sniffing something while it's running. Feeding signals into stopped or running hardware to plant new firmware in it. Sniffing crypto keys from running hardware: Bunnie Huang is one of the core Novena people, and he was the one who used an FPGA to sniff the Xbox crypto keys to allow Xbox Linux to boot. Keyboard & LCD can be switched at will by a user. The FPGA can be programmed to do many things faster than a general-purpose CPU and can do fast/wide data capture a CPU simply can't keep up with. The expansion slots have room and interface capability that can support a lot of potential new hardware, including home-built modules. The list goes on from there, but my fingers are giving out.
The CPU & graphics are almost more of a programmer convenience than a fundamental selling point. You've got to have them, but they're there to control other things and do some basic number crunching.
Any time I design a laptop into, say, a robotics system, I have to buy at least a dozen spares and store them away, because five years later when the laptop dies and a $500,000 machine that generates $1,000 per hour in profits goes down, that model of laptop will no longer be available. With open-hardware like Novena, I have a good chance of being able to make a new one that is compatible five years down the road, and can get by with two spares in storage.
Never say never. With the benefits of Moore's law approaching zero, specialization of CPUs seems to be the way to go. Intel saw the writing on the wall and partnered with Altera, even lending them their state-of-the-art fabs: something one would have thought would never happen. http://www.altera.com/devices/fpga/stratix-fpgas/stratix10/s...
For many tasks, from games to databases, FPGAs could provide huge benefits. The only reason I can see why FPGAs weren't adopted by mainstream PCs is that improving CPUs was so much easier. But with the ever-diminishing returns from x86 improvement, I can very well imagine that FPGAs could become viable in the mainstream.
I don't see FPGA acceleration being useful in mainstream computers soon. By the way, AFAIK the one in Novena is primarily intended for data acquisition, not acceleration.
FPGAs are generally good for accelerating data-parallel applications, but we already have had SIMD and GPGPU for a while. Both these technologies are only used by a small subset of the applications which could benefit from employing them. Why? I would say poor tools and abysmal programmer literacy. Automatic vectorization for SIMD sort of works, but it tends to miss lots of opportunities. Automatic acceleration with GPGPU is pretty much in the research phase. Manual development for SIMD and GPGPU takes skills that most developers don't seem to have at the moment and the trend towards high level, highly abstracted imperative languages isn't helping.
I guess at this point it might seem like I'm contradicting myself. The software side of things is lagging behind for GPGPU and SIMD, but these technologies are still mainstream. Why wouldn't the same happen for FPGA accelerators? My answer is cost. SIMD requires just a few small functional units and registers, a fairly small chip area in the processor compared to caches; the general purpose processor is itself a fairly small part of a SoC. GPGPU didn't require significant architectural changes from the shader model. FPGAs large enough to be useful, on the other hand, are expensive. Maybe economies of scale might make a smaller FPGA cheap enough to be included as an accelerator, but as far as I understand high transistor count / large area / low yield is unavoidable.
Plenty of exciting stuff going on in research, but automatic use of accelerators would work much better by moving to a dataflow model of programming.
Personal prediction: non-monotonic same-ISA heterogeneous computing is going to be the next big thing, maybe in the form of reconfigurable pipelines. A bit more far fetched: phase-change materials for very aggressive short-term DVFS to lower latency on mobile.
I would bet on hardcoded accelerators before FPGAs. (Apple's A8 looks like it is already half accelerators.) FPGAs are good for developing accelerators before putting them into an ASIC or shipping low-volume accelerators (like CAPI or Bing result ranking).
Sure, the average person isn't going to be programming the FPGA themselves, but when they can download someone else's apps for it at a moment's notice and minimal cost . . . who knows? After all, most people can't write software themselves either.
Yeah, looks kinda expensive. And not too fast, either: a 1.3 GHz quad-core CPU in an age when smartphones already have CPUs at least twice as fast? Even 1.8 GHz octa-core Exynos 5420-based ARM Chromebooks are laggy, judging by reviews.
Of course, I understand: doing hardware is hard, and there is an acute shortage of open-source ARM SoCs. Still, I wish they could figure out a way to provide more hardware for a smaller price. If nothing else, it would help the adoption of an open hardware laptop greatly: it's kinda weird that all this work is so that only some 1,000 people would get their laptops.
1.3 GHz is not bad at all for an ARM. It's not trying to compete with x86 for the performance crown. It's decently fast and runs Linux apps like a web browser just fine.
I've been using ARM machines as a desktop replacement for a few years now. Native applications run just fine on fairly modest hardware. However, for web browsing, a quad-core Cortex-A9 around that frequency and with a 32 bit DRAM interface just crosses into the usable (but somewhat laggy) category. Cortex-A15 with a 64 bit memory interface feels much closer to modern x86 machines.
Novena's 64 bit memory interface might actually make a big difference. From my tests, a 1080p 32 bit framebuffer uses up around 1/4 of the system memory bandwidth with a 32 bit DDR3 interface.
I also get the feeling that desktop browsers (chromium, firefox) aren't very well optimised for ARM.
Just curious, which ARM machines have you been using as desktop replacements? And why bother, since, I presume, none of these are mass-market products? What advantages do they have over x86 machines?
I'm mostly working on software for ARM and since all the tools I need for work run well enough, why not? I still use x86 at home.
> which ARM machines have you been using as desktop replacements?
The ones I've used most were IMX53QSB, then ODROID-X2, ODROID-XU and I'm now switching to ODROID-XU3.
> What advantages do they have over x86 machines?
I was able to get rid of the x86 desktop while only using ARM hardware I would have used anyway. Also, the boards have a tiny footprint (you can tape several to the back of a monitor for example), they're very low power and can be passively cooled. They're very cheap too.
I've also used some RK3188-based mini PC dongles for their portability. You can carry a reasonably fast GNU/Linux machine in your pocket, and they don't cost much more than a Raspberry Pi, which I find way too slow.
There are varying degrees of "fine". I'm sure it's acceptable, but according to tests, even Chromebooks with octa-core 1.8 GHz Exynos chips are somewhat laggy compared to Intel-based alternatives. Chrome OS is basically a browser on Linux, after all.
I know hardware comes in all varieties; I started with a ZX Spectrum (3.5 MHz 8-bit Z80), programmed a TCP stack on a 25 MHz 16-bit CPU, and I could be fine with a six-year-old laptop (typing on one, actually). Still, a premium-priced laptop could use more power; especially if it's ARM, there's never too much.
That speaks more about the architecture and requirements of Chrome OS than about ARM SoCs. For example, I have Firefox OS running pretty well on an APC Paper with specs so modest that they are trumped by most mid-range smartphones these days. Both systems are very similar in the sense that each is a Linux kernel with user interaction happening in HTML5-based apps, but one strives to run on a single core with 128 MB of RAM and the other runs on the Chromebook Pixel, one of the best laptops available on the market today.
The poster you replied to said that you can run apps LIKE a browser just fine. Not that you should run (web) apps INSIDE a browser. When running proper native code works just fine, why would you choose to run web "apps" anyway?