Embedded systems, IMO, must be deterministic, reliable, and consistent. Introducing garbage collection violates these three principles. Without them, how can you guarantee an interrupt can be reliably serviced in time? How can you guarantee that memory won't be exhausted because of some unexpected condition that prevents a timely GC? Many embedded systems developers don't even use malloc(), preferring static allocation so they can actually understand their memory requirements.
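Roughly what that discipline looks like if you transplant it into Go (a sketch only; the names Packet, pool, alloc, and free are made up for illustration, not from any real codebase):

```go
// All storage is declared up front, so worst-case memory use is
// knowable before the program ever runs: no malloc, no heap growth.
package main

const maxPackets = 16

type Packet struct {
	Len  int
	Data [256]byte
}

// Fixed-size pool at package level; pointers into it never allocate.
var (
	pool  [maxPackets]Packet
	inUse [maxPackets]bool
)

// alloc hands out a free slot, or nil if the pool is exhausted --
// the caller must handle exhaustion explicitly instead of growing.
func alloc() *Packet {
	for i := range pool {
		if !inUse[i] {
			inUse[i] = true
			return &pool[i]
		}
	}
	return nil
}

// free returns a slot to the pool by pointer identity.
func free(p *Packet) {
	for i := range pool {
		if p == &pool[i] {
			inUse[i] = false
			return
		}
	}
}

func main() {
	if p := alloc(); p != nil {
		p.Len = 3
		free(p)
	}
}
```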
It's either big enough for Linux, in which case have at it, or you need to reconsider why you're down in the kilobytes of total memory with a garbage collector.
I think the BEAM approach could be very attractive for embedded systems, given the right investment.
You certainly need to spend some time figuring out all the data types and how to access them in C, but if you're willing to put in that time, and you're capable of mucking about inside a VM, I don't see how it's that hard to understand. It feels to me to be on about the same level as the FreeBSD kernel: after chasing down surprising behavior enough times, I've got a pretty good feel for how to read the code, and whereabouts to start looking for code I'd like to read; but making changes can be a stretch, depending on where they need to happen. OTOH, I only have to dive into the depths when my team manages to break BEAM or the kernel, which isn't every day... If more things broke, I'd have more skill here. ;)
I've written a bunch of Erlang, and a bit of Linux kernel code.
You're right, I rarely break BEAM, because most of what I'm able to break is in Erlang. The only time this is untrue is performance work. If you spend a lot of your day hacking on BEAM, I'd love more documentation along the lines of the BEAM handbook, if you're interested.
Remember all those home computers from the seventies and eighties, such as the C64, Apple II, MSX compatibles, and Spectrum? Almost all of them ran "garbage collected" BASIC interpreters.
I don't think blanket statements are justified. There are a lot of different types of embedded systems.
> Without them, how can you guarantee an interrupt can be reliably serviced in time?
You wouldn't allocate memory in an IRQ service routine in the first place, GC or not. A GC'd, dynamic (malloc), or static system would all take exactly as long to service an interrupt (see the sketch after this comment).
A GC can also be confined to a subset of the system, where less time-critical functionality runs.
That's not to say embedded systems should do allocation at runtime. It's often reasonable to avoid it. But perhaps not all the time.
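To illustrate the point: the handler only stamps data into preallocated storage, and any real work (including allocation) happens outside interrupt context. How a handler actually gets wired up is target- and runtime-specific, so onUARTInterrupt below is a made-up entry point, but the pattern itself is general:

```go
// Single-producer/single-consumer ring: the "ISR" side only does a
// store and an index bump -- no locks, no allocation, bounded time.
package main

import "sync/atomic"

const ringSize = 64 // power of two, required for the free-running index trick

var (
	ring [ringSize]byte
	head atomic.Uint32 // advanced by the handler
	tail atomic.Uint32 // advanced by the main loop
)

// onUARTInterrupt would run in interrupt context. When the ring is
// full it drops the byte -- a policy decision, made explicit.
func onUARTInterrupt(b byte) {
	h := head.Load()
	if h-tail.Load() == ringSize {
		return
	}
	ring[h%ringSize] = b
	head.Store(h + 1)
}

func main() {
	onUARTInterrupt('x') // stand-in for real hardware delivery
	for t := tail.Load(); t != head.Load(); t = tail.Load() {
		b := ring[t%ringSize]
		_ = b // process outside interrupt context; allocate freely here
		tail.Store(t + 1)
	}
}
```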
Many BASICs had REDIM for resizing arrays on the go.
J2ME would like to have a word with you...
As another example, MicroEJ is targeted at Cortex-M CPUs.
And there is also Astrobe selling Oberon-07 compilers for M3 and M4 processors.
> 32-bit MCU, 130 KB RAM, 350 KB Flash/ROM
Anything less than this is almost certainly some kind of PIC, and not everyone is using those for embedded real-time deployments.
You are wrong about "anything less".
Anything less is probably 90% of the market. You will find 130 kB RAM and 350 kB flash only in high-end products from NXP (LPC family), Kinetis (KL family), or ST (STM32 family). You only need that much RAM for Java :)
EDIT to clear up the two sub-conversations here: one on what Aonix pulled off for hard-real-time Java, and one on the fact that there are also hard-real-time GCs for embedded, with an example of the latter.
However, I don't know what they have done with the PicoJava offerings, since PTC isn't as friendly as Aonix about making the documentation available.
As for the market, it depends on which one the products built with those CPUs are actually being sold to.
My Cisco phone and the Ricoh printer around the corner both are running some form of embedded Java.
Maybe you think they are part of the remaining 10%; however, Cisco and Ricoh thought it was worthwhile for their sales.
The 32-bit market was at $6 billion by 2014 per Atmel's report. There's also a huge amount of sales for Windows Embedded and embedded Linuxes. That represents a significant chunk of a massive market. So, it's quite worthwhile to call even a 32-bit-targeted, hard-real-time GC useful for "embedded" systems. As he said, it's part of the standard definition used in the embedded sector. The 32-bit side is going up rapidly, too, due to decreasing cost and power usage.
EDIT: The specs on them are also starting to look like the desktops of old. Actually, they started to do that quite a while ago.
The market goes all the way from something like the PIC10 up to 64-bit ARMv8.
It is all a matter of what a company is targeting as customer market, and how much it is willing to spend.
Just because a given language is not able to target 100% of the market, it doesn't make it invalid for that market.
If that were the case, C wouldn't be a valid language for embedded development either, given that many CPUs/uCs can't cope with straight ANSI-C-compliant compilers and require either a C dialect or pure assembly.
Believe it or not, not every embedded application has a hard real-time requirement.
You're just moving the OP's goalposts to make a point, here.
I'd say that's a bit myopic.
There's a huge range of devices between "a few kilobytes of memory" and "smartphone" that would be well-served by something like this.
But to your point, there is no magic answer. The question is rather: when are the capabilities of a full OS kernel like Linux worth the resources needed to run it? And the answer is ultimately: it depends.
I'm on board with you.
As the tooling improves I can definitely see it being a good language to use on embedded devices. And it's a rather fun language too; I certainly find it more pleasant than Go to write (but I seem to be in the minority, considering the popularity of Go lately).
When I get time (eventually) I'm going to be working on our own minimal Linux system for the devices. Really all I want is a device that can be accessed from behind firewalls (looking at Teleport for this with their new ARM support), and the rest can be compiled Rust binaries using upstart or somesuch :)
We do have some upcoming projects where I might get a chance to try writing stuff without an operating system. That'll be an interesting challenge!
Also, the "acceptable defect rate" is not necessarily very large for a majority of cases that users will care about.
There's nothing like it out there. This is zero abstraction, and it works.
I wish there were an ARMv7-R or ARMv8-R dev board around (that doesn't cost thousands), because those are actually meant for real-time applications and I would really like to try GERT on one.
This is the definition of a hard real-time system. In most of the literature, 'embedded system' is a broader term that just means there is some compute embedded in a device that performs a larger task.
TI Sitara (Beaglebone family) does this via the PRU, and Freescale added a Cortex-M4 to the i.MX 6SoloX for a similar purpose.
Also, embedded real-time JVMs fit in a few hundred KB and are used by the likes of the military, e.g. the Aonix picoJVM, to control real-time stuff like battleship missile tracking systems, which I assume are quite real-time.
Embedded systems range from the tiniest microcontroller up to multi-core Xeons, DSPs, and FPGAs.
Embedded != small.
The typical issues associated with embedded development are 1) cost and 2) response time (for real-time embedded systems).
The big one here is cost. If you're wasting a single byte in your code, you have to pay that cost in every single unit you make (e.g. millions).
Also I have a JVM running on the Cisco phone on my desk and the Ricoh laser printer down the hall.
Just because there is a portion of the market to which a certain concept doesn't apply doesn't mean it isn't viable in other segments of the same market.
For Go to be successful on embedded systems, it doesn't need to run everywhere.
Heck, there are even embedded CPUs that cannot cope with ANSI C, and that hasn't prevented people from making use of it in other market segments of the embedded space.
As far as GCs go, it's also about size if it's a constrained embedded system. I've seen a number of GC papers discussing tradeoffs between size (i.e., RAM use) and speed/latency. This even factors in a bit on the large ones like Azul's Vega, where they were still balancing those factors to get an optimal, "pauseless" GC for accelerating enterprise apps.
I agree on the other points.
How do interrupts work in this system? Do they map to goroutines, or what?
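Purely guessing at what that mapping could look like (hypothetical; I don't know GERT's actual mechanism): a tiny trampoline in interrupt context does a non-blocking send on a buffered channel, and an ordinary goroutine services the events.

```go
// Hypothetical interrupt-to-goroutine bridge: irqTrampoline and the
// IRQ number 7 are made up; only the channel pattern is the point.
package main

import "fmt"

var irqEvents = make(chan uint32, 32) // buffered: pending IRQ numbers

// irqTrampoline is the part that would run in interrupt context:
// a non-blocking send, so the handler itself can never park.
func irqTrampoline(irq uint32) {
	select {
	case irqEvents <- irq:
	default: // queue full: drop rather than block in the handler
	}
}

func main() {
	done := make(chan struct{})
	go func() {
		irq := <-irqEvents
		fmt.Println("servicing IRQ", irq) // ordinary Go code, GC and all
		close(done)
	}()
	irqTrampoline(7) // stand-in for the hardware vectoring here
	<-done
}
```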
There are a number of commercial real-time JVMs out there.
Heavily used by the military, including weapons control systems.
Other well-known ones are IBM WebSphere Real Time and JamaicaVM.
Also companies like Gemalto, Ricoh, Cisco have JVMs on their devices, but not real time.
I did find an announcement:
In the Aonix design, there is no GC on the hard-real-time threads or portions of the heap; those are usually managed statically anyway for maximum predictability. The GC can apply to anything else, and it gets preempted by real-time threads when necessary. It was usually tied into an RTOS like LynuxWorks' (now Lynx).
I still refer to Aonix because they were more developer-friendly, had more information on their website than the few whitepapers from PTC, and their site is still partially up.
Is there a formal definition for all three terms?
* Deterministic - the system is intrinsically incapable of undefined behavior, provably so. (Though extrinsic factors like hardware or network failure could result in undefined behavior).
* Consistent - Every read receives the most recent write or an error (from CAP Theorem https://en.wikipedia.org/wiki/CAP_theorem)
* Reliable - ?
Of course, in the end, it doesn't matter as much as people make it out to, because you can easily blow the stack in C. In reality, one of the worst disadvantages of garbage collectors is latency, and Go's GC is best-in-class in that respect.
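For what it's worth, the latency claim is easy to eyeball on a host machine: runtime.MemStats keeps the last 256 stop-the-world pause durations in a circular buffer. A quick sketch (the allocation pattern is arbitrary, just enough churn to trigger collections):

```go
// Allocate steadily, then read the most recent GC pause out of
// runtime.MemStats; PauseNs holds the last 256 pauses, NumGC indexes it.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var sink [][]byte
	for i := 0; i < 10000; i++ {
		sink = append(sink, make([]byte, 1024))
		if len(sink) > 1000 {
			sink = sink[500:] // keep some garbage flowing
		}
	}
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	if ms.NumGC > 0 {
		last := ms.PauseNs[(ms.NumGC+255)%256] // most recent pause
		fmt.Printf("GCs: %d, last pause: %dns\n", ms.NumGC, last)
	}
	runtime.KeepAlive(sink)
}
```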
And, obviously, while Go is pretty competitive in memory usage to many higher level languages, in my experience you can still be much, much more frugal on memory when coding in C.
Would it be possible for you to make a program that just exits and then time the whole bootup process? Thank you.
I have a specific use-case and would be willing to buy the board if it is fast enough.
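All I'd need timed is something like the following; setPinHigh is a stand-in for whatever pin access GERT actually exposes, since I don't know its API:

```go
// Minimal "boot marker" program: flip a pin the instant user code runs,
// then measure reset-to-edge externally with a scope or logic analyzer.
package main

// setPinHigh is hypothetical; substitute the real GPIO call.
func setPinHigh() {}

func main() {
	setPinHigh() // marks "user code reached"; time from power-on to here
}
```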
Is the minimal set of OS primitives that Go relies on documented anywhere?
If you're referring to ARM servers, then things are still pretty solid (it takes a while to line up an entire hardware and software ecosystem, even in a world where you're all set if it runs Linux). There are specs like SBSA and SBBR that ensure servers from any SoC vendor look roughly the same, but I would wonder why you would target bare metal in that case anyway. Have you considered targeting ARMv8 VMs, like the one modeled by KVM/qemu? Extra bonus in that it looks like an ARM server.
Freescale's support is probably the best available out there in this class of chips. Documentation is mature and plentiful (excepting the GPU of course but that's being worked around), and there is plenty of code sitting on their Github servers including Yocto recipes that are pretty close to mainline.
It looks quite interesting.