Made me wonder if it would make sense on these SoC targets to dynamically load a program from external storage and write into internal flash for execution, then start running it without a reboot.
Since such flash is typically only erasable in pretty big blocks ("sectors"), it would require some interesting design choices, but it should be doable. :)
Edit: Hot-patching in flash is totally possible as well.
When it was online, we'd connect to it over Ethernet and send it firmware updates that it would burn to an external SPI flash chip (we used it for testing communications protocols with a wide variety of products, and it needed frequent firmware updates). If one of these updates had a bug and bricked the controller, it was a hassle (not to mention downtime!) to remove it from the machines on the production floor and bring it back to engineering to plug in the dedicated programming device.
So we set up the SPI flash to store two copies of the firmware. If it went down, you could reset it while holding down a "restore last-known-good firmware" button, and the boot routine would reflash the on-board flash memory with the previous copy of the firmware from the SPI flash.
I wouldn't do this every time you needed to power cycle it - but on a per-revision level it was very useful!
Presuming "killed" means "won't accept new updates but will read" (?), you could then continue to use the chip.
There is less reason for this today because high-capacity, high write-count storage is cheap. But I have seen it done, e.g. in order to retrofit redundant firmware update schemes on devices that do not originally support it. A long time ago I looked into something like this for one of the devices I was working on at the time (the firmware was stored in the internal flash of the MCU, but was steadily growing, and we'd eventually run out of space to hold two copies; however, we also had a larger, cheaper, slower external flash that we could use).
The write-cycle limit of flash devices can make it a bad idea in some scenarios, but that's a decision you make on a case-by-case basis. ZOMG THIS IS FLASH DO NOT REWRITE IT is not exactly a clever design principle.
It was the idea of treating on-board flash as a load-target for temporary programs that felt fun.
What I really want is a DIP version of the PIC32 with enough RAM and flash to run this or the 4.4-based LiteBSD by the same people.
 - https://github.com/sergev/LiteBSD
As for a compiler: xtensa-esp32-elf will work, as long as the ESP-IDF toolchain is available.
IMO I'd generally want OOB management kit to be Very Very™ secure, especially if it lets me near any secure boot keys. In any case, this kind of thing will categorically have the ability to reboot the system, so if I can tinker with its configuration, I definitely want some degree of confidence in the auth process etc.
Completely outside the scope of a microcontroller, what would a reasonable 10-year target for video capture be within a "server console/admin" context? 1080p? 2K? DisplayPort? HDMI? (Obviously VGA)
I'm guessing HDMI and maybe 2K. Hm.
They're actually probably pretty well known for most things, just buried under decades of literature and commits.
Starting off with a 4BSD would probably be better, though, since 32-bit VAX BSD Unix got a lot more traction (and has a more direct code lineage to the modern ports).
But yes, if you're going 32-bit with enough RAM, you might as well just port OpenBSD/NetBSD and get on with it.
I know that modern FreeBSD can boot to a single user process with something like 32M RAM on 32-bit systems with a very stripped-down kernel configuration. 64M gives a little more breathing room. But both are still a couple of orders of magnitude larger than 128kB.
It wouldn't surprise me if OBSD or NBSD can fit in a little less RAM, but it also wouldn't surprise me if they still need at least a handful of MB, i.e., 32-64x Retro's 128 kB.
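For a rough sense of what "very stripped down" means: a minimal FreeBSD kernel config is basically GENERIC with most of it subtracted out, something along these lines (an illustrative, untested sketch; the specific options dropped would depend on the target):

```
# TINY -- illustrative minimal-ish kernel config sketch, not tested
include   GENERIC
ident     TINY
nooptions INET6        # no IPv6 stack
nooptions SCTP         # no SCTP
nodevice  usb          # drop the USB stack and drivers
nodevice  sound        # no audio
```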
Wow! My first BSD box (386BSD) had 8 megabytes of RAM and that was plenty! You could even compile moderately large programs without any swapping. But then, feeping creaturism has always been a BSD tradition! :)
In general, living software adapts to the constraints of the hardware developers use and care about. The most visible case of this is probably web browsers (and web sites).
If BSD developers were interested in a minimal memory configuration, they could make it happen. It just hasn't been anyone's priority, and machines have many orders of magnitude more RAM today than they did in ~1992.
That being said, software development in general is in a desolate state these days with unnecessary layers of abstraction and bloat all over the place. The featurism of ancient BSD looks pretty harmless today.
Your other question is also a good one. I couldn't hazard a guess at what the next 10 years for video will look like. I'd guess DisplayPort in some form, but VGA is still kicking around a lot. For *nix, serial still works quite well.
There are HDMI-over-cat5/6 extenders that are basically point-to-point: they just replace the cable to transmit over a longer distance.
Then there are more expensive products that claim to translate HDMI into IP and send it over an Ethernet network. I'm not sure whether they just wrap the signal in Ethernet frames and only work on a LAN, or whether they produce routable IP packets.
- HDMI-over-IP-over-Ethernet-over-cat5 (best case)
- HDMI-over-cat5 (worst case)
If Ethernet (layer 2) isn't somewhere in there, you're not going to be able to set up a layer 2 network for it.
If you just have a passive thing that translates HDMI connectors to RJ-45 connectors, Ethernet won't help you. But if you use Ethernet, you can also use IP. That is all.
There is no need to dynamically load any code on these types of devices; you would generally just use eXecute-In-Place (XIP).
The kinds of applications you develop for these devices simply do not require UNIX-like features (multi-user, etc.).
Many more appropriate OSes exist for this to be taken seriously.
There's lots of outrageous stuff inside today's consumer products. What makes you so sure that this won't be used in a real (whatever that means) IoT or IIoT device by someone?
For example... it can work in 128KB RAM. If you're using 128KB just to run the OS, that's completely wasted RAM that could be used by your application. This really counts when you're making 100,000 of these devices and are paying for every byte.
You don't waste valuable resources on desktop-level abstractions. That would make for a very expensive product.