- Runtime/space complexity and general algorithmic heuristics, so you know how the system will perform and can fix scaling issues in the code.
- Network characteristics (e.g. the fallacies of distributed computing).
- The CAP theorem and how to apply it to real-world designs.
- A coordination tool like ZooKeeper.
- An understanding of the technological lay of the land (e.g. compression formats like Snappy, and different tools and approaches for communication like ZeroMQ and Kafka).
- Metrics and alerting - e.g. how to instrument code with statsd, and how and where to alert.
- Concurrency concerns and scheduling - fortunately you can get away with understanding futures and the actor model (e.g. Elixir/OTP or Akka) and steer clear of raw threading.
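On the statsd point: the statsd line protocol is just `name:value|type` over UDP, so instrumenting code takes very little. A minimal sketch, assuming a local statsd daemon on the default port 8125; the metric names are placeholders:

```python
import socket

def statsd_line(name, value, mtype):
    """Format one metric in the statsd line protocol (c=counter, g=gauge, ms=timer)."""
    return f"{name}:{value}|{mtype}"

def send_metric(name, value, mtype="c", host="127.0.0.1", port=8125):
    """Fire-and-forget a metric to a statsd daemon over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(statsd_line(name, value, mtype).encode("ascii"), (host, port))
    finally:
        sock.close()

# e.g. send_metric("api.requests", 1, "c")    # increment a counter
#      send_metric("api.latency", 42, "ms")   # record a timing
```

Because it's UDP fire-and-forget, instrumentation stays cheap and can't take your service down if the metrics daemon disappears - which is why it's a good first thing to wire up before you need to alert on anything.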
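On the futures point: here is a minimal sketch of the futures style using Python's stdlib, as a stand-in for what Elixir/OTP or Akka give you (`fetch` is a hypothetical placeholder for real I/O-bound work):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(n):
    # stand-in for a network call; returns a result derived from its input
    return n * n

def run_all(inputs):
    # Submit everything, then consume results as they complete,
    # without ever touching threads or locks directly.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(fetch, n) for n in inputs]
        return sorted(f.result() for f in as_completed(futures))

# run_all([1, 2, 3]) -> [1, 4, 9]
```

The pool owns the threads; your code only ever deals in futures, which is exactly the "steer clear of raw threading" trade-off above.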
Embedded programming is all about forgetting all the really complicated algorithms you might have learned in school, because they usually don't matter, and when they do it's more important to be able to gather performance data than it is to do something fancy.
Traditional concurrency is still a very real concern because DMA engines, specialized coprocessors like the TI N2HET, and offload engines like modern audio codecs all have their own internal firmware that your system must interact with. Even getting the system to boot and to transition to low power states requires you to understand clock trees, clock gating, power supply states, and how to interact with any PMICs. Getting a real-time clock working is a similar story.
Raspberry Pis are good for dipping your toes into a Linux system that uses a specialized bootloader, but if you want to do anything truly embedded you're going to have to go deeper than that and work with a system like the PIC32, or even the PRUs on the BeagleBone Black.
If you're not using a JTAG at least occasionally then you're probably not close enough to the hardware to be considered "embedded."