
Ask HN: Why is the software ecosystem of single-board computers so ugly? - hexman
e.g. Raspberry Pi or Cubieboard
======
ChuckMcM
The answer is that the interests of the creators of the chips powering these
boards generally aren't aligned with those of non-commercial developers. So if
you're a "hobbyist", you are not relevant.

Interestingly from the Commercial side, it is very different. If you've got a
single board computer as part of your product and you are a going concern with
the necessary legal NDAs and whatnot in place, the manufacturer will send one
of their engineers to sit in your cube and pair program with you until your
system is running the way you want. They will create custom releases of their
binary blobs that do the things you need them to do in order to make you
successful. That is so that your product ships and you start ordering a
million a month of their product.

On the flip side there aren't too many really "open" SoCs, not like the old
days where the data sheets told you everything. So things are a bit more
challenging. I had hopes for the Zynq series from Xilinx as they had the
potential to be the basis for a good "common processor" base: dual ARM
Cortex-A9 cores plus enough FPGA fabric to make a classic frame buffer, etc.
But the number of
people who want that sort of system is measured in the thousands, not the
millions. No way to make a living at it, no way to sell it for what it would
cost to support.

Intel, however, has been bending over backwards in order to try to take share
from ARM. So they will talk to hobbyists about their smallest computers - the
Galileo, the Compute Stick, what have you. So there is an opportunity there;
for the moment they are aligned with anyone trying to give them exposure.

~~~
gozo
I agree, though I would add that much of this is because Android "won" Linux on
embedded. Today it (unfortunately, IMO) seems far more likely that a hobby
ecosystem will trickle down from Android than the other way around.

------
miratrix
I'm assuming you're talking about the platform/BSP side of the ugliness.

You have to look at the lineage of how things evolved to the current state -
in the PC ecosystem, everything is already on enumerable buses (PCI, USB, etc.)
with standards (PCI, UEFI, etc.) describing how the system is supposed to find
which device is connected where and have it all work together. The incremental
cost of opening that up to the public is thus fairly small, since you need to
build your platform to adhere to the standards that are already there anyway.
That's how you get to being able to boot a kernel image on a random system, or
insmod a random driver you found, and expect it to (mostly) work.
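
As a rough sketch of what that buys you: a Linux PCI driver only has to declare
which vendor/device IDs it handles, and the bus enumeration does the rest. (The
IDs and names below are hypothetical and error handling is trimmed; it's an
illustration, not a real driver.)

    #include <linux/module.h>
    #include <linux/pci.h>

    /* Hypothetical IDs; the PCI core matches them against whatever
     * enumeration finds on the bus, on any machine. */
    static const struct pci_device_id demo_ids[] = {
        { PCI_DEVICE(0x1234, 0x5678) },
        { 0, }
    };
    MODULE_DEVICE_TABLE(pci, demo_ids);

    static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        /* Called only when a matching device was actually discovered. */
        return pcim_enable_device(pdev);
    }

    static void demo_remove(struct pci_dev *pdev)
    {
    }

    static struct pci_driver demo_driver = {
        .name     = "demo",
        .id_table = demo_ids,
        .probe    = demo_probe,
        .remove   = demo_remove,
    };
    module_pci_driver(demo_driver);
    MODULE_LICENSE("GPL");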

In the SBC/Embedded ecosystem, there really aren't any standards. Since
internal topology of each SoC is different and the pace of new SoC releases is
so high, there's no time for standardization - you throw in random IPs from a
bunch of different vendors, figure out how to connect it all together, and get
it to the market. In this scenario, having something documented is actually a
negative thing - once something is documented, people expect it to work the
same way going forward. You can hide a lot of hardware deficiencies in binary
blobs, something that's very difficult to give up. Thus, there's a huge
disincentive to provide full hardware documentation. I'd imagine that in some
cases, for licensed IPs, the SoC vendors may not even be allowed to do so even
if they wanted to.

Things like DeviceTree are trying to nibble around the edges of this problem,
but given the current state of things, it'll be a while yet, as a lot of the
building pieces don't even seem to be in the picture yet.
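
For a flavor of what DeviceTree gets you when the hardware isn't
self-describing: the driver matches on a "compatible" string and reads its
parameters from the tree instead of probing a bus. (A minimal sketch; the
compatible string and property are made up.)

    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    /* Made-up compatible string, purely for illustration. */
    static const struct of_device_id demo_of_match[] = {
        { .compatible = "acme,demo-uart" },
        { }
    };
    MODULE_DEVICE_TABLE(of, demo_of_match);

    static int demo_probe(struct platform_device *pdev)
    {
        u32 clock_hz;

        /* The board's device tree, not bus enumeration, tells us the
         * block exists and what its parameters are. */
        if (of_property_read_u32(pdev->dev.of_node, "clock-frequency", &clock_hz))
            return -EINVAL;

        dev_info(&pdev->dev, "demo UART clocked at %u Hz\n", clock_hz);
        return 0;
    }

    static struct platform_driver demo_driver = {
        .probe  = demo_probe,
        .driver = {
            .name = "demo-uart",
            .of_match_table = demo_of_match,
        },
    };
    module_platform_driver(demo_driver);
    MODULE_LICENSE("GPL");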

~~~
TD-Linux
The lineage is hardly an excuse - the PC started out just like ARM is now, with
manual IRQ assignment, no hardware detection, and the like. But this was solved
in the
mid-90s with plug and play standards. It's really sad that in 2015, embedded
SoCs still don't have anything comparable.

~~~
akhilcacharya
Wouldn't that considerably increase bloat?

~~~
TD-Linux
Dynamic module loading means that it would only increase disk usage. You could
always package the modules separately if disk space is at a premium.

~~~
astrobe_
> Dynamic module loading means that it would only increase disk usage.

Disk? You mean, mass storage?

Hardware detection is definitely bloat on this kind of system, because there
isn't much variety to begin with.

PnP was created to let people who were less and less RTFM-inclined install new
hardware on a platform that won its market share because of its extensibility.

SoCs and SBCs, OTOH, are generally used in closed embedded systems that are
often not at all designed for evolution. Using auto-detection would be a waste
of resources and could potentially cause problems.
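
To illustrate the contrast: in a fixed design the hardware can simply be
declared at compile time, legacy Linux board-file style, instead of detected at
runtime. (A sketch with a made-up peripheral name and register window.)

    #include <linux/init.h>
    #include <linux/ioport.h>
    #include <linux/kernel.h>
    #include <linux/platform_device.h>

    /* Made-up register window; on a closed design this never changes,
     * so there is nothing to auto-detect. */
    static struct resource board_uart_res[] = {
        {
            .start = 0x10009000,
            .end   = 0x10009fff,
            .flags = IORESOURCE_MEM,
        },
    };

    static struct platform_device board_uart = {
        .name          = "demo-uart",
        .id            = -1,
        .resource      = board_uart_res,
        .num_resources = ARRAY_SIZE(board_uart_res),
    };

    static int __init board_init(void)
    {
        /* Hand the fixed device list straight to the driver core. */
        return platform_device_register(&board_uart);
    }
    arch_initcall(board_init);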

------
Sanddancer
Because people are trying to use a hammer on a part that needs a precision
screwdriver. A generic OS with a monolithic, non-realtime kernel offers
downright terrible support for the useful and interesting parts of embedded
components -- the multitude of I/O components designed to keep things from
needing the CPU, timers to trigger interrupts at regular intervals so that the
CPU can spend most of its time asleep, etc.

Generic OSes simply have the wrong sort of philosophy for this.
Microcontrollers, like the AVR or ARM Cortex-Ms, tend to provide an
environment that gives you more of the tools needed to take advantage of the
processor in a reasonable manner. They provide the hooks for interrupts so
that you can service IO when it comes in, they provide network stacks and
filesystem libraries that you can use if your project calls for them, or
ignore when it doesn't. Because of this, you end up in situations where
programming for an ATTiny4313, with 256 bytes of RAM and 4 kilobytes of flash,
is more enjoyable and rewarding than a system with a million times those
resources.
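
Roughly what that looks like on something like the ATtiny4313 mentioned above
(a sketch against avr-libc; timer register names differ slightly between AVR
parts, so treat the details as illustrative): a timer interrupt wakes the CPU
periodically while it otherwise sleeps, with no OS involved at all.

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <avr/sleep.h>

    ISR(TIMER0_COMPA_vect)
    {
        PORTB ^= _BV(PB0);               /* the periodic work: toggle a pin */
    }

    int main(void)
    {
        DDRB   |= _BV(PB0);              /* pin as output */
        TCCR0A  = _BV(WGM01);            /* CTC mode: count to OCR0A, restart */
        TCCR0B  = _BV(CS02) | _BV(CS00); /* clk/1024 prescaler */
        OCR0A   = 250;                   /* compare value sets the tick rate */
        TIMSK  |= _BV(OCIE0A);           /* enable the compare-match interrupt */

        set_sleep_mode(SLEEP_MODE_IDLE);
        sei();                           /* enable interrupts globally */
        for (;;)
            sleep_mode();                /* CPU sleeps between interrupts */
    }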

A lot of this can be blamed on the documentation, or lack thereof. A lot of
the higher-end embedded devices -- like the Broadcom chip in the Pi -- don't
have nearly as much documentation available to the ordinary user as the smaller
chips do. As a consequence, users have to pore through the tiny amount of
documentation that is available to guess their way to the answers, further
ensuring that you'll only get a few ports of operating systems that really
don't exploit the power of the chip they're running on. You just get a generic
experience with a generic OS.

The solution is to hack. Go deeper than Debian, farther than FreeBSD. Bug
manufacturers for the tools needed to expose the dark corners of the chip, to
get register maps and interrupt handlers. We need good real-time solutions
that the common person can count on. To get things not so ugly, we need the
opposite of the tools we have now. We need to lay these chips bare, because
the current path just isn't sustainable.

~~~
justaaron
amen

------
imrehg
I currently work at an embedded-boards company in (technical) marketing, and I
use a lot of these boards outside of work too. I've been asking myself the same
thing for a long time, and these are the observations I've come up with so far
to try to answer it (at least in part).

Getting software right is bloody hard work! Especially with ARM where you need
to redo a lot of things for every new piece of hardware.

Most companies (and most PMs/engineers) I think perceive boards as the end-
products themselves, and any software development after the initial release
(and maybe some bug fixes) is more of a burden than value add. This attitude
needs to change, because it results in very, very few boards actually being
used to their full potential. What good is great hardware if nobody can make
it work? Maybe this will change, but it needs good thought leaders inside
companies to make that happen.

I see that upstreaming is often not considered, or can't be done. The quality
of the code is just awful, because that's not a design goal. Being part of an
ecosystem, helping your future self do a better job (not needing to start from
scratch every time if things are upstreamed) is not part of the thinking for
many. These things are (or are thought to be) outside of the PM's
responsibilities.

Resource constraints come into it a lot: many companies try to support way too
many products, and end up with a level of "barely" making it work, which is
good enough for many traditional customers. Doing a good job needs a lot more
resources. I remember reading that RPi spent about $5 million worth of
development on just the Linux support. I can't imagine a lot of other companies
putting that much into any single product.

And there's a lot of the traditional "trade secret" thinking. A lot of places
are more afraid of losing sales due to being copied than of not selling boards
because of lack of interest. The main goal is never really "enabling the
customer/user" or giving options; the first thing is protecting the IP, because
the thinking is stuck in the way things used to work.

Also, the "software ecosystem" is highly fragmented, all projects rely a lot
on volunteers, and require a lot of specialized knowledge. I don't know if
it's even possible to bring people together, but whoever would achieve that
would do a big service to both sides...

These are just some thoughts; I'm sure they're not the whole picture (and they
definitely, definitely do not reflect the opinion of my employer :)

~~~
chei0aiV
I don't know how much pull you have within your company and with the SoC/etc
suppliers that you use, but please please please preach the mainline mainline
mainline, upstream upstream upstream mantra.

~~~
imrehg
amen!

------
bsder
Um, because hardware is _hard_. Let's go shopping.

Specifically: because until the latest incarnations of both the BeagleBone and
the Raspberry Pi, everybody was running hacky kernels with bodge after bodge of
garbage layered on to make things work.

As of roughly the last year, both the Raspberry Pi and the BeagleBone Black can
run relatively clean versions derived from Linux mainline (Debian in my case).

Once the BeagleBone got off of the disaster that is Angstrom Linux, the number
of BeagleBones around me shot up like a rocket.

------
mschuster91
I think something that's been overlooked is that no one really cares enough to
get Linux kernel patches upstream-compatible.

Even if the sources are available, it'd take many man-months of engineering
work to get them compatible with mainline HEAD, and even more effort to meet
the coding standards required by the kernel maintainers.
Manufacturers/OEMs/ODMs don't care, because it won't improve their bottom line
to have a current kernel (at least not until a customer wants to run the latest
Debian with systemd and udev, which carry certain minimum requirements on the
kernel). The Linux kernel community already has too much work on their hands,
and I don't see any major company sponsoring the couple million dollars that'd
be required for the integration work.

Just look at the myriad of linux-2.6.24 forks. Android handsets, SOHO el-
cheapo routers (people still ship 2.4.x kernels for these, LOL), gambling
machines (no joke, I actually own a real, licensed slot machine running
2.6.24!)...

------
SwellJoe
What do you mean? They run Linux. How much more software could you possibly
want? And, how could it be healthier? It is the largest free software
ecosystem that has ever existed.

I'm not trying to be ornery, I just don't understand what is ugly about Linux,
or what is specifically ugly about using Linux on these systems?

~~~
striking
Perhaps the poster meant the numerous proprietary blobs and patches that can't
be upstreamed for certain platforms.

The RasPi fixed those problems, though.

~~~
chei0aiV
The RasPi still needs a GPU blob to boot, right?

~~~
Narishma
Yes, the equivalent of the BIOS on PCs.

------
digi_owl
Because they grew out of the microcontroller business, rather than the
micro/personal computer business.

Thus most of the companies involved have a "ship it and forget it" attitude
towards their products.

The PC is a very odd duck. IBM was a latecomer to the micro/personal market,
and they did so using off the shelf components (except for the chip handling
the initial bootup, better known as the BIOS).

Thus it was possible for other companies to clone the PC using the same
components, and at the time it was possible to do a clean room
reimplementation of the BIOS to get around any copyright claims (back then
there were no software/code patents).

So once those BIOS reimplementations started shipping, the PC market exploded
with competition. This in turn drove prices down.

Another thing is that the IBM design was in a sense a throwback to an earlier
"era". While most microcomputers sold were pretty much single board computers
(possibly with a few expansion ports and a single edge connector for ROM
cartridges) the PC was more like the Altair 8800. Except for CPU and RAM,
everything lived on ISA bus expansion boards.

Thus you had an initial flexibility that has very much stayed with it to this
day (and was massively improved when the ISA bus got replaced by the PCI bus).

------
nascentmind
I see a lot of people having problems with the BSP. Excuse me for the
shameless plug, but I am developing an open source bare-metal firmware for the
Samsung SoC S3C2440 (
[https://github.com/mindentropy/s3c2440-mdk](https://github.com/mindentropy/s3c2440-mdk)
) and planning to port it to the S3C2451. These are found in the HP iPAQ etc.
It will basically be a tutorial for people trying to develop their own firmware,
or reference code for bringing up the SoC and its controllers, i.e. drivers for
the controllers plus the ARM board bring-up code. It can also be used to test
the board by exercising individual IPs.
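
To give a flavor of what register-level bring-up code looks like (a simplified
sketch, not taken verbatim from the repo; the addresses are the S3C2440 GPIO
port B registers per the user manual, so double-check them against the
datasheet before relying on them):

    #include <stdint.h>

    /* S3C2440 GPIO port B registers (verify against the datasheet). */
    #define GPBCON  (*(volatile uint32_t *)0x56000010)  /* function select */
    #define GPBDAT  (*(volatile uint32_t *)0x56000014)  /* pin data */

    static void led_init(void)
    {
        /* Configure GPB5 (2 bits per pin) as a plain output. */
        GPBCON = (GPBCON & ~(3u << 10)) | (1u << 10);
    }

    static void led_toggle(void)
    {
        GPBDAT ^= (1u << 5);                            /* flip GPB5 */
    }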

I am planning on supporting more Samsung SoCs, and will also start off with the
TI AM335x Sitara series and FriendlyARM's NanoPi2, which contains the Samsung
SoC S5P4418.

I am chasing OEMs/ODMs to fund development for their SoCs, and nobody seems to
be interested except FriendlyARM. What do you think I should be doing to get
some funding for this?

~~~
lifeisstillgood
Wow - keep publishing would be my advice. Build an audience - eventually that
will drive a critical mass.

~~~
nascentmind
Thanks! I also have a blog (
[http://thesoulofamachine.blogspot.in/](http://thesoulofamachine.blogspot.in/)
) where I go into detail explaining the parts of the code for the various
subsystems needed to get things up and running. I have developed it without any
costly JTAG tools, and the code is compiled using free GNU tools, keeping in
mind learners who cannot afford expensive development tools (although a good
scope would be great for debugging clocks).

------
duskwuff
It's not just those boards. Embedded development is generally "ugly" across
the board. :(

~~~
quanticle
That was my thought. If anything the software ecosystem around Arduinos and
Raspberry Pis is much improved compared to what came before (e.g. PICs).

Partially it comes down to the constraints you're working under. When
everything, program, data, and all has to fit in a few K of RAM, you use
"ugly" hacks to make sure that everything fits, while leaving you the maximum
amount of space needed for your data. While modern embedded system'
capabilities have grown to the point where those practices aren't needed any
more, the software development practices for those systems are slower to
change.
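
A small, made-up illustration of the kind of trick meant here: packing several
flags and a counter into one byte instead of giving each its own variable.

    #include <stdint.h>

    /* One byte of state instead of four separate variables. */
    struct sensor_state {
        uint8_t armed     : 1;
        uint8_t triggered : 1;
        uint8_t low_batt  : 1;
        uint8_t retries   : 5;   /* 0..31 fits in the remaining bits */
    };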

~~~
tmuir
I think another big reason is that, compared to desktop and web based
software, embedded applications are almost always completely custom. There are
common hardware specs on every PC, and three big operating systems. So you end
up having huge developer communities that build all kinds of utilities, tools,
and other building blocks that you can easily reuse. Usually embedded systems
have a fairly simple operating system, and custom hardware, which means there
isn't much reusable code, and it's up to the developer to provide a much
larger portion of the functionality.

------
tmuir
It's to the point that hardware features on their own aren't really selling
points, at least in my opinion. You can have high speed, tons of ram, 10
different buses/ports, and sell it for $10. But if there isn't a healthy user
community and a good BSP, all of that hardware remains highly inaccessible.

It's kind of a roller coaster of emotions. You can go from feeling like a
wizard because of all of the shoulders you stand on for very little effort, to
feeling like a dunce, because some driver doesn't work, and you have no idea
how to go about implementing a solution.

------
afsina
Google wants to bring some sanity to the development side with Fletch
([https://github.com/dart-lang/fletch](https://github.com/dart-lang/fletch) ,
Dart for embedded devices), not sure if this is what you are looking for
though.

[https://www.youtube.com/watch?v=Hx2iGEAvZRk](https://www.youtube.com/watch?v=Hx2iGEAvZRk)

------
hexman
At the latest DockerCon, Solomon talked about a solution - see
[https://youtu.be/at72dhg-SZY?t=1445](https://youtu.be/at72dhg-SZY?t=1445)

~~~
subway
I'm having trouble finding the 'solution' you refer to in the video. Mention
is made of working on an abstraction layer, but that does nothing to solve the
mess that is embedded development.

Lipstick on a pig -- current embedded development ecosystems are fundamentally
broken, and can't be fixed by just another abstraction.

------
stefantalpalaru
The kernel space is ugly because of all those out-of-tree patches and binary
blobs, but the user space is rather normal. I run a vanilla Gentoo ~arm on my
Banana Pro and I have yet to encounter an issue.

