Bitbake is the generic build system around which OpenEmbedded recipes (for particular packages) were implemented. Anyone could create a distro using those recipes; the whole effort dates to the early-to-mid 2000s.
Yocto was/is a project of the Linux Foundation, basically a working group, which looked at the state of embedded Linux and said, "We want to contribute to this project with documentation and more recipes," starting externally but with hopes of getting the work mainlined. Poky was/is their reference distro for this effort.
Nowadays all the packages contributed by the Yocto project have been consolidated into OpenEmbedded, but Poky remains the reference distro.
tl;dr: Yocto is first and foremost an organization of people. Bitbake is the build system. OpenEmbedded is a community of distro-agnostic build system recipes. Poky is a distro maintained by the Yocto organization utilizing OpenEmbedded recipes.
Let me suggest looking into the `devtool' utility. It's a Yocto utility that enables the on-the-fly work the author enjoyed with Buildroot. For instance, running `devtool modify virtual/kernel' will place the configured kernel source in a workspace directory where you can grep and modify and patch to your heart's content; I might work for weeks in this state bringing up a new board as I patch drivers or develop patches to carry out-of-tree code on top of the mainline kernel. When I'm happy with my changes, I add them back into my recipe, disable the temporary workspace with `devtool reset virtual/kernel', and test by building my recipe from scratch again.
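A rough sketch of that loop, assuming the kernel provider resolves to a recipe named linux-yocto and a destination layer called meta-mylayer (both illustrative):

    devtool modify virtual/kernel                 # unpack and configure the kernel source into the workspace
    cd workspace/sources/linux-yocto              # grep, hack, patch to your heart's content
    devtool build virtual/kernel                  # rebuild with the in-progress changes
    devtool finish linux-yocto ../meta-mylayer    # optionally write the accumulated patches back into a layer
    devtool reset virtual/kernel                  # or just drop the workspace and build from the recipe again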
Yocto has other amenities that ease iteration on existing boards. For one, it's straightforward to cross-compile my Python 3 extension modules in a recipe in one base layer for my product family. Later, when I'm spinning up a derivative project, I can set up a product-specific layer to override the CPP flags, change configurations, or patch the source to better target my board.
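As a hedged illustration of what such an override can look like, a product-specific .bbappend might contain something along these lines (recipe, layer, and file names are hypothetical):

    # meta-productb/recipes-devtools/python/python3-myextension_%.bbappend (hypothetical)
    FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
    SRC_URI += "file://productb-target.patch"
    TARGET_CPPFLAGS:append = " -DPRODUCT_B"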
The yocto learning curve may be steeper, but the benefits of proper dependency tracking and layers far outweigh the drawbacks. At this point, if I use a board from a vendor that ships a buildroot BSP, I'll take a day to port it to yocto before moving further.
However, if you're doing this full-time, and you want to do anything remotely complicated (you will), and especially if you have multiple products (you will), you start yearning for OpenEmbedded. It takes some time to learn, and the abstractions are hard to understand outside-looking-in, but it's well worth the effort.
bitbake -c populate_sdk <image-name>
This will output a self-extracting archive in your deploy directory. You can install this mostly self-contained SDK to a directory of your choosing. Then source the environment file it installed to get the correct build variables set up (CC, CXX, CFLAGS, LD, LDFLAGS, PATH, etc.). From there, the variables will be set up to use your embedded image's sysroot. I do almost all of my userspace development using the SDK. If you're using CMake to build your native code, it will pick up the variables for you.
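Roughly what that looks like in practice -- the installer and environment-setup file names below are illustrative, since they vary with your image and machine:

    ./my-image-glibc-x86_64-cortexa7hf-neon-toolchain-3.1.sh      # self-extracting SDK installer from the deploy dir
    . /opt/my-sdk/environment-setup-cortexa7hf-neon-poky-linux-gnueabi
    echo $CC                                                      # cross-compiler plus --sysroot pointing at the image sysroot
    $CC -o hello hello.c
    cmake -S . -B build && cmake --build build                    # CMake picks the variables up from the environment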
There are some gotchas with it; in particular, depending on your configuration, some modifications may be necessary to get the SDK to include everything relevant to your build. Probably the most notable is that static libraries are not included in the SDK by default.
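If I remember correctly, that particular one is covered by adding the staticdev packages to the SDK image features, something like this in local.conf (untested sketch):

    SDKIMAGE_FEATURES:append = " staticdev-pkgs"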
I've used bare crosstools, buildroot and now yocto in production projects spanning 14 years. Personally I find it a lot faster to move with yocto once you grasp its principles and structure: especially if you have several related product lines that can be expressed as combinations of layers.
There is a significantly steeper learning curve for Yocto when compared to Buildroot. Buildroot is faster for the initial build, but often slower than Yocto after the initial build.
Here's what I like about Yocto:
1. It forces you to be organized, everything has a home and things can't conflict with each other.
2. By using Yocto's shared state cache, you can have one central build server and automatically fetch pre-built artifacts on other computers (see the sketch after this list). With this I can get a developer hooked up with everything they need to build a full system image on their computer in just a few minutes -- and completely build the image at that time.
3. I am confident that incremental changes are built correctly. If you change a build-time parameter of a package in Buildroot, things which depend on that package are not rebuilt; that is not the case with Yocto. The flip side is unfortunate rebuilds of many packages just because of a minor change to, say, glibc: I may know that they do not need to be rebuilt, but Yocto does not.
4. Buildroot puts built items in a single staging directory. Package install order differences mean that you can overwrite files accidentally. Consider /usr/include/debug.h in two different packages, or something like that.
If you are not explicit with dependencies, the build may actually succeed but it may not be deterministic. If package A happens to be built before package B, you're golden. This does not always happen, and sometimes this is not found until you do a clean and a rebuild. Yocto forces you to be explicit -- the build tree only includes artifacts for recipes which have explicitly been defined.
5. Yocto can use the same tree and shared state cache to build multiple images for a given product without having to clean the world.
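Regarding point 2, the developer-machine side of that setup is roughly one line of local.conf, assuming the build server exposes its sstate cache over HTTP (URL illustrative):

    SSTATE_MIRRORS ?= "file://.* http://buildserver.example.com/sstate-cache/PATH;downloadfilename=PATH"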
I loved buildroot -- it was fast, nimble, and easy to use. It also let me cut corners and find myself in situations where builds would unexpectedly fail after a clean. I am also very happy that I took the time to learn how to effectively use Yocto.
Part of that fault lies with the hardware manufacturers. They are invariably hardware companies that don't value software. They pick an open-source project like OpenWRT or Buildroot and literally hack at it until the resulting monster can build an image for a reference system that can stay up just long enough to pass a few end-to-end tests. And the damage is incredible. Nothing is spared mutilation at the hands of their incompetent developers: the entire software stack, from u-boot, over to Linux, across essential system services and concepts, all the way up to, say, the LuCI interface OpenWRT ships, is modified, mostly haphazardly, to support one specific configuration. The resulting garbage is frozen in time, zipped up, and thrown over the fence to their partner companies trying to turn their hardware into a portfolio of consumer products increasingly defined by software first. It's hard to describe the level of stupidity; they will base their shitty proprietary Linux modules on LTS versions of the kernel, then never update anyway! They adopt "standardized" upstream things like nl80211, then require you to use all the proprietary interfaces they previously had and just stuffed into some side channel.
The other problem is using something like OpenWRT or Buildroot in the first place. This is not to disparage these projects; obviously they are mostly driven by hobbyists who are free to use their time however they want. But there is certainly a tendency in these projects toward adherence to arbitrary, mostly terribly old and shitty Linux standards and 'practices' grossly unfit for what you would want in a reliable embedded system. There is a focus on breadth, expansion, and freedom instead of relying on robust building blocks. They try to collect the entire history of open-source software and bend their build systems to make and package the original .tar.gz downloaded from some FTP server. Shell scripts rule supreme, not just in the build but often on the resulting firmware images. A lot of these choices are supremely unfit for the purpose of making long-term-supported firmware for embedded devices.
Lots of praise here for Android. Sure, they started with the same recursive Makefile stuff in their original startup roots. But they iterated. They saw the problems. It is a monumental achievement in the field to have a build system that will first draw up a plan, then go about executing it with a considerable chance of success, instead of failing randomly in the middle of some lazily recursed Makefile. They look critically at all the pieces that get built and end up running on the device; they standardized on the Clang toolchain, and they don't try to give you a choice of three compilers and four standard libraries. They didn't shy away from the long haul of pushing that singular toolchain across the entire stack; being able to compile the Linux kernel with Clang is the result of foundational Android work. They revolted at the sight of glibc or uclibc and built and maintain their own in-house libc, on a tight feature leash. Their focus with bionic isn't to be faithful to some obscure corner of a POSIX standard circa 1983; it's to enable things like a safe allocator or uniform crash handling and report generation across all of userspace. Any sort of shell is intentionally hamstrung and scripts are absent. No patience for oldschool crap like SysV init here.
Just as a data point: Google WiFi is built with Qualcomm WiFi radios, but it uses none of Qualcomm's proprietary software. They preferred to use the open-source upstream drivers. Zero confidence in any of Qualcomm's "software".
I thought the Android team wrote bionic just to avoid the LGPL. Do you know for sure that there's more to it than that?
I have tried Yocto through the years and deployed a bunch of projects with it. Although it gives you the sense of a more coherent environment than Buildroot, I find that it is difficult to maintain since it has too much unnecessary complexity for embedded targets, and sooner or later that ends up getting in the way and coming back to bite you. Not enough visibility, which is crucial for embedded systems -- more so if the systems have network connectivity and need to be properly secured.
It could be that I am too accustomed to Buildroot's pretty flat, spartan, no-frills architecture. With highly constrained devices that is an advantage. Automating configurations takes little effort, builds are quite fast, and you can maintain testing and deployments easily with your own bash or Python scripts. There are few places to fiddle with variables, and you can easily assemble images on the fly or through overlays. Many times you just need to modify the configuration file and rebuild.
Over the last few years I have been progressively using Buildroot along with Docker. They complement each other well: you can create base systems and then grow them, tailored to your application, taking advantage of Docker's incremental build system. I regularly maintain cross-compilation and qemu-based Docker image toolchains for my target architectures that way. They can be recreated and tracked reliably, 100% of the time. I use them to build the application images, or just augment them with a Buildroot overlay containing the extra files, and then deploy them remotely through the Docker facilities.
I played with Bitbake, but the learning curve seemed much worse than ltib.
I've found, though, that making your own image classes is the most direct way of formatting your image if you need something exotic. The image class system is kind of fun: you can keep stacking on more reusable classes that act as a kind of pipeline.
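A minimal sketch of one stage in such a pipeline, with class and function names made up -- an image recipe would inherit this, and further classes can keep appending their own steps:

    # classes/my-trimmed-image.bbclass (hypothetical)
    ROOTFS_POSTPROCESS_COMMAND += "strip_docs; stamp_build_version; "

    strip_docs() {
        rm -rf ${IMAGE_ROOTFS}/usr/share/doc
    }

    stamp_build_version() {
        echo "${DISTRO_VERSION}" > ${IMAGE_ROOTFS}/etc/build-version
    }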
To get it working on AWS I ended up doing what I needed to do by spawning a VM with an off-the-shelf AMI, attaching a second disk, dd-ing my image on, and then capturing that as an AMI. (I'm kind of annoyed that I couldn't find a proper API for this, but maybe I'm not looking in the right place. Their import tools all want to understand things about your image and mess with it, and I just wanted to send them a flat disk image.) This process was an annoying enough slowdown for testing that I ended up making an image that would boot up just enough to get networking up, then fetch the real image from S3 based on directions in user-data, overwrite itself, and reboot. If going with newer AWS instance types, don't forget to include their kernel modules, and have fun debugging when it doesn't boot :)
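For reference, the manual flow described above boils down to something like this with the AWS CLI -- volume/snapshot IDs, device names, and image names are placeholders, so treat it as a sketch rather than a recipe:

    aws ec2 attach-volume --volume-id vol-XXXX --instance-id i-XXXX --device /dev/sdf
    sudo dd if=my-image.wic of=/dev/xvdf bs=4M conv=fsync
    aws ec2 create-snapshot --volume-id vol-XXXX --description "flat disk image"
    aws ec2 register-image --name my-image --virtualization-type hvm \
        --root-device-name /dev/xvda \
        --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"SnapshotId":"snap-XXXX"}}]'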
'make list-defconfigs' will list everything that is available
The A/B update thing typically works by having two disk partitions and then using your boot loader to switch between them and only updating one or the other. You can probably write a u-boot script to track failed boots and switch between images or something like that though I've never ventured down that road.
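A made-up U-Boot environment sketch of that idea -- partition numbers, load addresses, and file names are illustrative, and the failed-boot counting leans on U-Boot's bootcount/altbootcmd mechanism (CONFIG_BOOTCOUNT_LIMIT):

    setenv boot_a 'setenv bootargs root=/dev/mmcblk0p2 rootwait; ext4load mmc 0:2 ${loadaddr} boot/zImage; ext4load mmc 0:2 ${fdt_addr_r} boot/my-board.dtb; bootz ${loadaddr} - ${fdt_addr_r}'
    setenv boot_b 'setenv bootargs root=/dev/mmcblk0p3 rootwait; ext4load mmc 0:3 ${loadaddr} boot/zImage; ext4load mmc 0:3 ${fdt_addr_r} boot/my-board.dtb; bootz ${loadaddr} - ${fdt_addr_r}'
    setenv bootcmd 'if test "${active_slot}" = b; then run boot_b; else run boot_a; fi'
    setenv altbootcmd 'setenv active_slot b; saveenv; run bootcmd'    # runs after bootlimit failed boots
    saveenv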
you can also easily strip them down and get itty-bitty tarballs suitable for import as Docker container images
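e.g. roughly this (tag name illustrative, and the rootfs needs a shell like busybox for the second line to drop you anywhere useful):

    docker import output/images/rootfs.tar my-tiny-rootfs:latest
    docker run --rm -it my-tiny-rootfs:latest /bin/sh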
I can understand why the author focuses on cheap entry-level parts. It's cheap and it's fun. He also kind of dismisses the major SOMs for this reason so maybe there can be a more comprehensive review of those (I build almost exclusively with these), but that's probably out of his view.
Going with a mystery AllWinner part is really compelling, but I can't go into production (with a 5+ year expected run lifetime) on that.
Companies like NXP will guarantee certain parts for at least 10 years after launch date. Search for 'i.MX' on this page to see their date guarantees: https://www.nxp.com/products/product-information/product-lon...
I’ve been happy with NXP (Freescale) for the former, obviously being a large automotive supplier has a benefit.
For the SOM I’ve recently been using Toradex. They do a great job of making their SOMs comply with a common signal layout at the connector. I also recommend Boundary Devices. TechNexion is also interesting if you need a smaller footprint; they’re the group that also ships the WandBoard.
Many years back, I also did a writeup (much less intense) that goes more into what it took for me to get Linux running on an i.MX6-based SoC on the software end, including how I found a bug in the USB clock tree in the kernel for the i.MX6, how I debugged it, and how I mainlined the fix (my first and only kernel commit to mainline).
Such projects are incredibly satisfying to work on, and they make you feel much more competent when working with embedded systems. There is much less "black magic" going on.
This embedded Linux article is a really worthy followup!
This is an astonishing amount of effort!
So here's a big thing people should consider when building something from scratch: am I making something I can use again when I make my next product? It's a much higher mental load if you have no commonality between products. It can also cause problems for sales, support, and management when they have to remember nitty-gritty details about each of your very divergent products.
When you wear many hats you have to mentally task-switch between domains, and that switch can sometimes be measured in weeks (example: you've been doing mechanical design and now have to switch to FPGA design after months of not touching the codebase).
This is why jumping on the latest and greatest trends isn't always the best idea. Languages, tools, frameworks, new chips, new FPGA or CPU architectures, etc. Things that don't mesh at a certain level impose a cognitive load that can really slow you down.
The simplest example of this I have, from years back, would be using C and Verilog on embedded systems. While Verilog is not software but hardware description, the "C" feel of the language --as opposed to VHDL-- makes for a much easier transition between hardware and software development, at least for me.
Although I'm mostly focused on the app code, I spent about four months this year debugging various wifi stacks (one for each generation of product), only to have most of the problems fixed by moving all products to a 5.7 kernel base -- which was no easy project in itself, but mostly not my work.
In my experience, the processor, memory, flash, and all other things combined are trivial compared to getting reliable wifi.
At a casual glance it looks like your products are mostly [processor] connected to a network of CAN and/or 4-20mA devices. Not possible to back-port newer control modules to older product lines?
That said, impressive that you are handling all of it on your own. Hire somebody!
I have back-ported some things, but really a large chunk of the products had terrible technical decisions made before I started. I spent the first couple of years just dragging the company back from the technical and financial brink. I'm slowly eliminating old products from our sales sheet in favor of ones I engineered, so it will eventually be OK.
The fact that almost everything is ARM now makes it a lot easier but the on-chip peripherals and tools are usually still very idiosyncratic.
Some others that I have enjoyed in the embedded Linux realm are the posts by George Hilliard, the Mastering Embedded Linux series, and Designing my Linux-powered Business Card.
I have seen this exact confusion with many friends and colleagues in the past who don’t understand just how different a microcontroller is from an application processor, and what a difference it makes in even simple things (ha ha, simple) like circuit board layout.
And application programmers are so used to things like virtual memory, and to thinking or hoping the system (never mind an application garbage collector) will manage memory, that when you suddenly tell them their stack is limited to a few kilobytes and please, please limit your heap usage too, they’re … cast adrift!
When embedding an SBC, you can fork the firmware from whatever Debian/Android/etc. variant is best supported by that SBC. Stripping down working and well tested Linux image is much easier than building a new OS image from scratch.
On a past project we had two DDR RAMs and a NOR Flash on the same bus. There was one hardware engineer, and he laid out the memory subsystem in about a day. (The power supplies were dramatically more complicated.)
It almost worked first time. I noticed about one memory error per day per device, which I repro'd with repeated `md5sum /tmp/largefile`.
We just dropped the clock rate in U-Boot.
Failed devices? Heat it up with a hot air gun for a while. Fixed 90% of them. Still doesn't work? Throw it away.
There are no mysteries, just stuff you don't know yet.
Once you get into flyby or T-routing topologies - good luck!
If you are playing with a 2x2" EVK then 8 or 10 layers isn't a budgetary problem. When the board starts getting larger, it becomes a huge cost adder compared to a low-speed 2-layer PCB.
This is why designers switch to doing it with SOMs. The SOM can be a high-speed part and the carrier PCB is cheap and slow.
PCBs are built in a sandwich stack. The middle dielectric layer, or "prepreg", is way thicker than the dielectric layers in the outer layers. This doesn't seem like a big deal, but for the speeds that a DDR interface runs at, it creates a higher impedance return loop for whatever you end up routing on layer 3. (Typically DQs, in my experience.)
You can avoid this entirely by building with a six layer stackup, and routing your DRAM signals on layers 1/2/3, keeping layer 2 as an unbroken ground plane. The result: nice, low impedance return tracks.
Yes, Jay is right - most of the time, this doesn't create problems. But when it does, it's pretty challenging to fix without a board spin, and when you're doing this professionally, that loses you time, creates risk, and is generally hella stressful.
And you still need to pass radiated emissions testing. Even those right-angle DIMM connectors for the SOMs spray RF noise everywhere when you put video on them.
If you are familiar with gentoo it has a package manager called portage. You type “emerge packagename” and it installs it. Prepend ROOT=blah and it installs it in that dir instead!
So your build scripts are pretty trivial. Just write a bunch of emerge lines and you are done. Install in your new root, then package that ROOT. Simples!
Building from source could be slow, so cache binaries once built. Subsequent builds will reuse the binaries when you add “-k” to the emerge commands above.
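Put together, the flow described above is only a few lines -- package set and target path are illustrative, and a target profile/cross setup is assumed to already be configured as described:

    export ROOT=/srv/targetfs
    FEATURES="buildpkg" emerge sys-apps/busybox net-misc/dropbear   # first build: compile and cache binary packages
    emerge -k sys-apps/busybox net-misc/dropbear                    # later builds: reuse the cached binaries
    tar -C "$ROOT" -czf target-rootfs.tar.gz .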
Profiles allow you to build target versions of software, e.g. customising compile flags or software versions. I use both a libc and a processor profile. The arch profile is also split between host and target, so, for example, static libs might be installed on the host but not in the target.
I used an overlayfs system to trim the delivery.
I think the big difference is I can rely on this huge distribution. Yet if I need some platform patch I need only drop it in my patches directory and hit rebuild. Put the patches dir in git and I’ve got a trackable distribution!
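One stock way to get exactly that behavior is Portage's user-patch directory (package and patch names illustrative); patches dropped there are applied automatically at build time:

    mkdir -p /etc/portage/patches/sys-apps/busybox
    cp fix-watchdog-timeout.patch /etc/portage/patches/sys-apps/busybox/
    emerge sys-apps/busybox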
To be honest, bitbake mostly seems to be a less principled/polished version of portage or nix. I would much prefer an embedded toolchain based on either of them than dealing with bitbake's syntax and quirks. The one thing that seems relatively unique to bitbake is layers, which essentially mean you can avoid making modifications to the base layer while still being able to patch/tweak just about anything in the build. For embedded stuff, where you're basically almost always working off of someone else's patches to an upstream project which you are then patching yourself, and all of them can update independently, it's the only way of keeping somewhat sane (but the action-at-a-distance can make for difficult debugging, which is something I would wish bitbake was better at: there's no way to query e.g. 'what statement in what file added this compiler flag?', and the set of possible files can be huge).
Zooming out to a broader management context, non-trivial systems usually comprise multiple processors. In these circumstances production volumes are generally lower and subsystem iteration more likely, therefore it makes more sense for managers to worry about overall architecture (eg. bus selection, microcontroller vs application processor, application processor architecture) and consider factors such as speed of prototyping, availability of talent, longevity/market-stability of vendor's platform, feature-set and nature of software toolchain(s), etc. rather than worry about optimizing specific processor part selection.
The management priority generally only flips closer to the article's fine-grained approach in the event that you are designing a consumer electronics widget or otherwise very high production run product with a low change expectation, where individual processor selection and BOM optimization become greater concerns.
Conclusion: For most people, for most products, most of the time, you don't want to start by worrying about specific processor part selection. Further, prototyping should be done on third party dev boards/SOMs and when production volumes justify it, final PCB design can be handed to an outside EE. This is not always a viable strategy (eg. due to form factor or other application constraints), but it's fast, cheap and hard to go too far wrong.
The only thing that SOMs provide is a processor + DRAM + PMIC. If you practice and become proficient at designing around application processors, it should take you no longer than 3-4 hours to get this component of the system (the processor, DRAM, and PMIC) laid out when working with these entry-level parts.
SOMs aren't some magical remedy to all the problems. It's still up to you to design the actual system, which takes hundreds of hours. The difference between using a SOM or a raw-chip design is negligible at this point.
I have no problem prototyping on EVKs --- in fact, I link to EVKs for each platform in my review. But a lot of these evaluation boards are pretty crummy to prototype with; some don't have all the pins of the CPU brought out, others use proprietary connectors that are a hassle to adapt to your hardware. You shouldn't be afraid to spend an 8-hour day designing a little breakout board for a part if you're interested in using it in a product that's going to span 6-months' worth of development time.
Of course there are caveats. I'm entirely focused on entry-level parts; if you need a Cortex-A72 with a 128-bit-wide dual-rank DRAM bus, sure, go buy a SOM. Also, it should go without saying that it completely depends on you and your company's core competencies. This article is aimed at embedded designers who are usually working on hardware and software for microcontroller-based platforms. If you work at a pure software shop with no in-house EE talent then this article is likely not relevant to you.
Obviously for lots of applications it would be impossible to use a 264-ball BGA, but I've been there with projects with 3 CAN buses and Ethernet plus flash memory and thought maybe we should be using a Cortex-A7.
I'm sure you know, but file storage on microcontrollers can drive you insane, ha ha, and the same goes for graphics + TCP/IP.
> when compared to application processors, MMUless microcontrollers are horribly expensive, power-hungry, and slow.
What is it about the lack of an MMU that causes a hunger for power?
A microcontroller usually doesn't have fancy out-of-order execution, fancy caches, etc., as that would make the execution less deterministic in time. An MMU would as well.
Lacking these features also makes microcontrollers a lot slower, and I'm guessing he's thinking about cost/watts per MIPS or something like that. Yes, the application processors draw more power overall, but (I assume) they are so much faster that it more than makes up for it in dollars per MIPS or watts per MIPS.
So it appears more of a symptom than a cause. But again, I might be wrong.
You're right, I was thinking of the A8, A9 and similar.
> the MMU is the big dividing line in my opinion
I'm no expert, but seems like a reasonable line in the sand to me.
This paragraph is confusing to me.
- How would an MMU help malloc to execute "swiftly and reliably"?
- What does "actually access a memory location" imply exactly?
- "since Linux frees and reorganizes pages behind the scenes": Is this not what people dislike from malloc? That it's none deterministic?
In other words, without an MMU all programs share the same memory space, and if it gets fragmented, generally you'd have to reboot. With an MMU, fragmentation can still be an issue, but it's greatly reduced, since each process has its own memory map. And if the memory of a process gets fragmented, you can just restart that process. If the runtime supports object compaction/relocation, then fragmentation may not be an issue at all.
Re access to memory locations -- mostly any direct access to instruction or data memory in a userspace program automatically goes through MMU remapping.
1. I once had a temperature/humidity sensor, but never figured out if it was analog or digital (it had three prongs; someone in the field thought it could be digital). What would be some ways/tooling you could use to independently test this?
2. For in/out on embedded boards, is there any suggested way to capture reads (or writes), for example raw data? What about the other way around?
3. Which board here might be good to practice on (or something else) for a hello world of device driver writing? Something on the easy, learning side, and ideally practical, that interacts with a real device.
The advantage of Arduino is that it will give you a good introduction to embedded development and I/O principles. Working with Arduino, you'll find that the CPU is reasonably capable but that the programming abstractions are clunky, and that'll give you the excuse you need to get comfortable with direct I/O port access. What you learn along the way will apply on more sophisticated platforms.
As for the temperature sensor, it's most likely digital (I2C or similar interface). Without the part number it may be difficult to get it working. If you don't have the part number, it's best to chuck it and order one from Adafruit or Sparkfun that comes with the necessary documentation and support code. Reusing salvaged parts isn't the cost/productivity win that it used to be.
If you need more horsepower for your project than an Arduino or Teensy-class board can provide, then you would want to look at Raspberry Pi or Beaglebone Black. If you aren't already comfortable with *nix development, you have a massive learning curve ahead, for better or worse, and you're going to be spending a lot of time in Professor Google's classroom.
He's posted some stuff on twitter that alluded to this work - lots of debugging DRAM and swapped bit signals.
Linux is in C though. So, what is really going on?
Does anyone have suggestions if one wanted a super fast single core? i.e. to make something for a software synth where the bottleneck is likely to be speed of a single core?
He sings the praises of Linux memory management and virtual memory, which are unavailable on a microcontroller. Would be interested in comments on the downsides of swapping in a system with a small amount of onboard Flash that may wear out.
If you're out of RAM on an embedded project, rethink your design or add more RAM (if time and budget allow)
Virtual memory is what allows for copy on write pages when forking. It is the preferred basis for implementing memory mapped files. It allows for sharing a single copy of shared library in ram, without causing everything to break if one program decides it wants to patch "its" copy of the library in memory.
Yeah sure it allows for paging out anonymous memory too, but that is hardly its sole function.
I am currently using the Allwinner V3s, and the review covers this part accurately.
Regardless, that wasn't really the point of the article.
But agreed, those aren't the point of the article.
"Embedded" is a big space.