Anyone considering a Raspberry Pi project may want to look at the new BeagleBone Black. It is marginally more expensive, has better performance, is completely open, and has a fairly good software-support community.
One caveat vs. the Pi, though: it doesn't have the GPU-assisted video decoding of the Pi (which, despite not being one of the Pi's main goals, ended up being an important factor in the platform's success, IMO), and even for basic framebuffer and 3D graphics it doesn't currently support 1920x1080 output (and if it does in the future, it'll likely be limited to a slow refresh rate due to pixel-clock limitations). For many people/projects this doesn't matter, and the much better GPIO of the BeagleBone (plus the presence of a couple of real-time-capable microcontrollers separate from the ARM CPU) would be more important than the video limitations.
Mine arrived two days ago. Beautifully laid out, with two symmetrical 46-pin female headers, micro HDMI, and microSDHC. Boots flawlessly and looks like a USB drive to Win 7. You need a magnifying glass to see some of the components on its underbelly.
John Clark has just created a new website called armhf.com, which has step-by-step instructions for loading Ubuntu 13.04 into eMMC memory. The site is 20 days old and provides image downloads.
I chose this board as the controller for a Techno-Isel-to-3D-printer conversion.
I second that opinion. The BeagleBone Black has a whole lot more pins available, including ADC (8), PWM (8), and I2C (2). I think the Pi only has one PWM and one I2C. I'm looking at a BeagleBone Black for use in a robot I want to build. I do love my Raspberry Pi, though its main use now is opening my garage.
This article is a little misleading in that it seems to indicate that FB will trash Xeon servers and drop in ARM processors. Current ARM processors (SoC and standalone silicon) have poor data throughput. You can mask this using media accelerators, e.g. decoding H.264 video on the GPU so you get smooth playback while saving some data bandwidth. ARMv8 (allegedly sampling in 2014-2015) will double the AHB/CoreLink width and bring a 64-bit instruction set, but we are not there yet. What is much more likely is that they will incorporate specialized hardware using FPGAs/ASICs and use ARM cores as supervisory modules. That way, they can run a general-purpose OS and use ARM tools to develop for the supervisor module, and still get the data throughput of the custom controller. This is how data processing is already done on 1080/2k/4k video streams. Xilinx, and likely Altera, already provide FPGAs with hard ARM cores (in addition to IP for soft cores), which makes it very easy to roll your own data controller with an ARM core managing it. It makes a lot more sense for FB to go this route for custom NAS boxes, with network switch/router hardware possibly to follow in Google's path.
I didn't get the impression at all that Facebook was "dumping" Xeon, merely that they were adding ARM.
Raw throughput is not really the relevant metric; throughput per unit power is. ARM manufacturers have been optimizing for power consumption and unit cost for a long time, while Intel has been optimizing for throughput and speed. Both sides are aiming for the same target from different angles. A "low power" Xeon might be 40W for four cores, but the typical range is 60W - 140W. Calxeda sells a sixteen-core, 20W ARM board. By these numbers, a Xeon core needs 8x the throughput of an ARM core to beat it in throughput per watt.
Now, this is not entirely unreasonable. I've done benchmarking of my Core i5 against my Atom on an audio sample rate conversion library I wrote, and the Core i5 has 6-7x the throughput of the Atom. So, I would believe you if you said that a Xeon core is more than 8x as fast as an ARM core, and therefore the throughput per watt is better. This is all guessing, based on one benchmark I did for a library I wrote. I'm sure that Facebook has done some experiments with the applications they'll actually be running on the ARM.
I'm not sure where you get the idea that these ARM computers will be centered around specialized hardware, Calxeda's website only mentions general-purpose hardware built around ARM cores.
If your business is largely shuffling data around then your machines may be DRAM speed limited. Sounds like a lower power processor is going to make sense even if it has smaller, slower caches.
I hear Applied Micro will sample their custom ARMv8 chip by the end of this quarter, so it's likely we'll see at least the custom ARMv8 chips start shipping in the beginning of 2014. The stock Cortex-A57-based chips, like those AMD (and, I think, Calxeda) are using, will probably arrive in late 2014.
I think Nvidia might target Project Denver for servers, too, and that's also a custom ARMv8 chip, but I don't think it will be ready until the 2nd half of 2014 (even though they initially promised late 2013). They might be waiting on the Maxwell GPU architecture to be ready.
Focusing mainly on electronics, I will agree that hardware is having a glamour moment, but not because more funding is available. I am an embedded designer and I work closely with several hardware startups (in Silicon Valley and other places) ranging from novel LED lighting to data acquisition systems. Every one of those startups had to bootstrap itself until it was financially viable from sales before it was able to get any outside investment. My sample size is small; however, the data agree five out of five times.
The article is dead on with respect to the falling costs of prototyping; I would also add the falling cost of integrated circuits in general. For example, I can get high-quality PCB prototypes for about $30, shipped, in one week (5cm x 5cm, two layer, 6/6/6 design rules). This was unheard of five years ago. Furthermore, you can now get a 32-bit ARM processor (which doesn't need many external components) in the same package as an 8-bit microcontroller, for about the same cost, and you can run an open-source IP/USB/BT stack on it without having to cobble together your own. Having the extra performance available, at low cost and high integration, gives the designer agility similar to that of a lean software startup: you can use available code to prototype first and then optimize after you have your feature set. This makes it easier to develop smarter devices that integrate with a broader software stack, e.g. cloud-connected embedded devices.
Looks like the core issue is a poor preprocessor implementation. Modules are a good idea in principle; however, we would be adding new features to address shortcomings in existing features instead of fixing the problems in the existing code.
No, the issue is not just poor implementation. Problems like O(M*N) bloat and scope pollution are fundamental design flaws with the preprocessor header system.
Almost every widespread modern language (all except JavaScript? [1]) has a built-in module system. Adding modules to C is not just some sort of hack to work around preprocessor limitations. Instead I'd say that the C preprocessor is a hack to work around lack of modules.
You can add some functionality to limit scope and implement a caching system (which is also proposed here) to deal with M*N issues with a focus on keeping backwards compatibility. E.g. the caching is done automatically, and the scope-spamming control is a best-effort solution based on static analysis.
Moving from headers to modules is a fundamental change in how the language operates and is guaranteed to further break compatibility between compilers. I worry that jumping to add these features to the language standard is premature and we should instead look further to optimizations within the preprocessor and linker to see if we can improve performance first.
> You can add some functionality to limit scope and implement a caching system (which is also proposed here) to deal with M*N issues with a focus on keeping backwards compatibility.
You can't cache the AST of a #include'd file without breaking the standardized semantics of #include. ccache (which is what you're basically proposing) gets away with it by just ignoring the standard and shrugging if some legal programs break horribly when it's used. The Standards Committee doesn't have that luxury.
You can't change existing functionality while "keeping backwards compatibility". The draft Modules proposal does a much better job of backwards compatibility than what you're proposing, because it allows #include to continue to have the same semantics it has always had, and even provides a clean way forward for using both #include and modules in the same translation unit.
> Moving from headers to modules is a fundamental change in how the language operates and is guaranteed to further break compatibility between compilers.
How would a standardized module system "break compatibility" between standards compliant compilers? The title of this story is completely misleading: this isn't "Apple's" proposal, this is about LLVM's implementation of the Standard Committee's Module Working Group's draft proposal.
> I worry that jumping to add these features to the language standard is premature
The committee was worried about that too; that's why the draft proposal was held out of C++11 so that vendors could try out implementing it and see how it fared in the real world.
> we should instead look further to optimizations within the preprocessor and linker to see if we can improve performance first.
It's not as though nobody has ever tried to improve preprocessor or linker performance; people have been looking at those issues since the beginning of C. There is, fundamentally, no way to improve the performance of the current system without breaking semantic compatibility with existing programs.
You can maintain backwards compatibility while fixing bugs. It's equally a Big Deal to have discipline while working on language architecture to minimize feature creep/bloat.
mobius.io here, we make hardware and were pretty excited when we read http://www.paulgraham.com/hw.html. Got rejected today and checked the server logs, saw nobody ever logged into our demo site. Is this typical or just means that our application was really, really bad?
Well, put another way, it might mean that your application wasn't really, really good. It could have been good, or, in fact, it might have been really good - but, I suspect the YC approach is to start with the very, very good applications. They then try and eliminate some percentage of those through some further due-diligence, bring a smaller group in for interviews, and then finally select a very small percentage of the original group.
Because YC hires primarily based on team/commitment/skill/fortitude/enthusiasm and potential, I'm guessing that reviewing the demo is done after there is some belief that the first properties are present.
[edit: I checked your site - the first two questions that come to my mind are, "What is the potential for this growing into a $10B enterprise?" and "What is their unfair competitive advantage in building this interface system?"
PS - with a bit of work, this would be the start of an awesome Kickstarter. Then you don't have to give any equity away, which, if you really believe in your prospects, should also be attractive.]
Thanks for the tips. As for the enterprise question, we want to eat ni.com's lunch, since a lot of their lower-end systems require a computer to operate. We're thinking of doing a Kickstarter to get 50 to 100 prototypes built and the case design finalized. The hardware is solid and the firmware/server-side features are pretty much done.
Feel free (anyone else too) to contact me if interested in a demo. We're in Mountain View.
Probably a better strategy. If you take a look at Jessica Livingston's talk from Startup School (http://startupschool.org/2012/livingston/), she notes that YC gets nervous about hardware--but nerds (like me) on Kickstarter love it. That's how the Pebble watch came to grow successfully. Kickstarter is one of the few places you can find and fund really innovative hardware products. Best of luck!
Thanks for the link. I figured YC knew the risk if PG asked for more hardware people to apply. It's true that it's very hard to have the same type of growth in a hardware startup as software, but someone has to make the gadgets. Additionally, while Kickstarter gives you funding, they don't provide the same type of mentorship that YC may offer.
Yes, the general fear of hardware in the valley was cured over the past year. There are still issues specific to hardware companies, but it is no longer that big a problem.
Things which are hardware plus service are even more popular.
The initial mobius.io device has 16 general-purpose I/O lines, several serial ports, and analog in, so it is much more flexible than Nest. As for Android@Home, mobius also has its own cloud service, so you don't need to worry about infrastructure. What we would like to do is have the same functionality as something like the NI USB-6009 (http://sine.ni.com/nips/cds/view/p/lang/en/nid/201987) without the computer. And we also have LabVIEW drivers for our device, so you can automate things in your business/home from anywhere using JS/LabVIEW/HTTP GET if you want to.