
Background: I used to be an automotive software engineer. I speak for myself only here.

Whenever the topic of automotive software comes up on HN there are comments along the lines of "global variables bad", but not much constructive feedback.

I want to explain some of the tradeoffs that led to the architecture used in automotive software, to get a better discussion going with HN readers.

tl;dr Given the hardware restrictions, real-time requirements and measurement capabilities required in automotive software, shared global variables without locks are a fast and safe way to share state between different software components, as long as each variable is only written in one place in the program.

The microprocessor has to run for 10+ years in a wide range of temperatures and be dirt cheap, so you end up with specs like 180 MHz, 4 MB of flash and 128 KB of RAM.

The program must run deterministically with respect to memory. There is no malloc/new in the code. All variables are statically allocated.
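
For illustration only (the buffer name and size here are made up), the typical pattern is a fixed-size static array dimensioned for the worst case:

  /* Worst-case size decided at design time; no heap, no malloc(). */
  #define SAMPLE_BUFFER_LEN 32u
  static unsigned short SampleBuffer[SAMPLE_BUFFER_LEN];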

Because the physical world doesn't pause, no code is allowed to block while waiting for resources, especially synchronization primitives like mutexes.

The software architecture is in 2 main parts: basic software containing the real-time OS and hardware drivers, and the application layer which has the domain-specific code for controlling the engine, brakes, etc.

The basic software is implemented using usual C programming techniques. It has an API provided by function calls and structs to hide implementation details of each microcontroller.

The application software is where the programming model is different.

To understand why, you need to know where automotive software comes from and what it is trying to achieve.

Originally all controllers were mechanical: a valve opens proportionally to the vacuum in a part of the system. Then some controllers were implemented in analog electronics: take multiple voltages, feed them through an op-amp and use the output to control a valve.

So automotive software reproduces this: get some inputs, compute the same physical equations at a regular rate and generate outputs.

This is dataflow programming. Blocks of code have inputs and outputs. They are executed at a fixed rate that depends on the physical phenomena (air flow changes fast, temperature changes slowly). Different blocks are connected together in a hierarchical way to form subsystems. Encapsulation is achieved by viewing these blocks as black boxes: you don't need to care how a block works if you are only interested in knowing which inputs it uses and which outputs it produces.

Here's an example component to control a gizmo.

It might be implemented in a visual environment like Simulink by MathWorks, or implemented by hand from a spec.

  #include "GizmoController_data.h"
  
  /* Slow process: look up the gain from a 2-D calibration map. */
  void GizmoController_100ms() {
    Gizmo_Gain = interpolate2d(Gizmo_Gain_MAP, EngineSpeed, CoolantTemp);
  }
  
  /* Fast process: proportional control with an offset, clamped to 0-100% duty cycle. */
  void GizmoController_10ms() {
    Gizmo_Error = Gizmo_PositionDesired - Gizmo_Position;
    Gizmo_DutyCycle = limit(Gizmo_Gain * Gizmo_Error + Gizmo_Offset_VAL, 0, 100);
  }
It takes some inputs (EngineSpeed, CoolantTemp, Gizmo_PositionDesired, Gizmo_Position), has some intermediate values (Gizmo_Error), and outputs (Gizmo_DutyCycle). Those are implemented as global variables. It also uses some constants (Gizmo_Gain_MAP, Gizmo_Offset_VAL). It has 2 processes, running every 100ms and 10ms. All this information would be specified in an XML file.

The header GizmoController_data.h is auto-generated at compile time by a tool from the XML file mentioned above. It will contain global variable definitions for the inputs, intermediates and outputs with the appropriate volatile, const, static and extern storage classes/type qualifiers. This ensures that the compiler will enforce that inputs can't be written to, intermediate values are private to the component and outputs can be read by other modules.
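
As a rough idea, the generated header for this component could look something like the sketch below. This is illustrative only: the types, array dimensions and exact qualifier rules are mine, not the output of a real generator.

  /* GizmoController_data.h -- illustrative sketch, included only by
     GizmoController.c. Real generators differ in naming and conventions;
     floats are used here for readability, real code often uses scaled
     integers. */
  
  /* Calibration constants (ROM in production, RAM in development). */
  extern const volatile float Gizmo_Offset_VAL;
  extern const volatile float Gizmo_Gain_MAP[8][8];
  
  /* Inputs: outputs of other components; const here so the compiler
     rejects writes from this component. */
  extern const volatile float EngineSpeed;
  extern const volatile float CoolantTemp;
  extern const volatile float Gizmo_Position;
  extern const volatile float Gizmo_PositionDesired;
  
  /* Intermediate values: private to this component. */
  static float Gizmo_Error;
  
  /* Outputs: defined and written only here, read by other components. */
  volatile float Gizmo_Gain;      /* written by the 100ms process */
  volatile float Gizmo_DutyCycle; /* written by the 10ms process */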

Note that no explicit synchronization is needed to access inter-process variables like Gizmo_Gain or inter-component variables like Gizmo_Position. It's shared memory between 2 processes scheduled in OS tasks that can potentially interrupt each other, but since the write is atomic and happens only in one place, there is no data race. This is huge! Concurrent programming, without locks, with the best efficiency possible, using a simple technique anybody can understand: only one place in the program is allowed to write to any global memory location.

Calibration is another aspect of automotive software. In most software the constants either never change or can be set in some kind of configuration file. For an automotive controller, the value of constants (gains, offsets, limits, etc) depend on the vehicle so they must be configurable at run time during development. This is implemented in the C code by putting all constants in a memory area that is ROM in production units, but RAM in development units. The compiler enforces that application software cannot change constants, but the basic software includes code so that constants can be changed from the outside in development. This process is called calibration and is done by calibration engineers who are usually not the ones who wrote the software. Note that calibration can drastically affect the behavior of the software. What would happen if Gizmo_Gain_MAP is set to all zeros?
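
A minimal sketch of the idea, using a GCC-style section attribute purely as an illustration (the actual pragma/attribute, section name and default values are toolchain- and project-specific):

  /* Calibration constants grouped into a dedicated linker section. The
     linker script maps this section to flash on production units and to
     RAM (initialized from flash) on development units, where the basic
     software lets the calibration tool overwrite it. */
  #define CAL_CONST const volatile __attribute__((section(".calibration")))
  
  CAL_CONST float Gizmo_Offset_VAL     = 2.5f;           /* made-up default */
  CAL_CONST float Gizmo_Gain_MAP[8][8] = { { 0.0f } };   /* tuned later by calibration */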

Measurement of variables is essential to understanding what's going on inside the embedded controller. Having all that state available in global variables makes it possible for the calibration tool to request the value of any variable in the software at a fixed rate and display it in a virtual oscilloscope.

The measurement and calibration tool needs to know how to access the variables and constants. It uses a file that maps names to addresses for a particular version of the software. That file can easily be generated at compile time since all allocations are static.
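
In automotive practice this mapping typically lives in an ASAM A2L description file consumed by the measurement/calibration tool; a heavily simplified, made-up illustration of the information it carries (addresses are fictitious):

  ; name-to-address map for one software build, generated from the link output
  EngineSpeed        0x40001230   float32   r/o   measurement
  Gizmo_Gain         0x40001234   float32   r/o   measurement
  Gizmo_DutyCycle    0x40001238   float32   r/o   measurement
  Gizmo_Offset_VAL   0x08040010   float32   r/w   calibration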

Going back to the architecture of the application software, let's look at where our gizmo controller fits. It is not the only component needed to make the gizmo work. You also need components to calculate the gizmo position from some external signal (let's say an analog voltage), to route the output signal to the powerstage driver on the PCB, and to determine which position the gizmo should currently occupy. These would form the gizmo subsystem package.

When the supplier releases gizmo 2.0 (TM) they upgrade the input signal to be a PWM input instead of an analog input. Modularity in the software allows the software team to simply replace the gizmo position component with one that reads a PWM instead of an analog voltage and keep the rest of the gizmo subsystem the same. In the future, projects that use gizmo 1.0 use one version of the gizmo subsystem and projects that use 2.0 use another.

This is true at any level in the hierarchy: as long as the inputs and outputs are the same, a component or subsystem package can be replaced by another.

Version control in automotive software reflects this. Instead of having one tree of versions and releases like a typical software project, each component, subsystem package and software project has its own tree of versions. Each software project will reference the subsystem packages required for its engine type, vehicle platform, sensors and actuators, etc. This is how code reuse is achieved.

Testing is a mix of simulation (the sensor/actuator is simulated in Simulink and connected to Simulink block diagram of the software component), hardware-in-the-loop (a computer simulates the vehicle, but the real electronic control unit is used) and vehicle testing.

Thanks for reading. I hope this improves your understanding of how automotive software is structured.

I'm hoping the discussion will bring examples from other fields with similar real-time requirements, like robotics, drones and aeronautics, on how they architect their software.




So you can use global variables and the like, but is there any progress in going higher level than C in industry? In which case you use global variables and whatnot, but you have static assertions about things like "this is only written to in one spot"?

For example, I know there's an embedded systems company that writes everything in Haskell, and running their Haskell program actually generates "safe" C code that they compile down to their chips.

My impression from the oft-cited Toyota report was not only that there was a lot of global variable stuff, but that these "write-once" principles weren't super respected.


Without going into specifics, there was a static analysis tool that was part of the toolchain that caught multiple writes in different modules or in tasks of different priorities. It was a build error unless allowed on a case-by-case basis.


Testing also included very expensive (SW license cost, not HW) cycle-accurate simulators at Delphi. ASIC specs were implemented in C and hooked into the simulator. Prototype firmware was compiled and run on the CPU simulator and "board" (netlist of simulated ASICs). This helped discover bugs/spec deficiencies before ASIC tape-out (save $$$). Of course, it was all wrapped in a Tk "dashboard/test unit" GUI for the other teams to consume. Tk was actually quite a pleasure to use, especially when interfacing with C!

EDIT: It basically provided at least two implementations for an ASIC by independent teams. The ASIC guys would implement the real deal in an HDL and test it. However, full system tests where the HDL simulation is hooked up with CPU simulation were too slow. By implementing the same spec in C, you gain simulation performance and get a second implementation and set of eyes that can help find bugs.


Quality post. Thanks for sharing.

If some of these timers are implemented as separate OS tasks that can interrupt each other, what's to stop one from taking a bit too much time and throwing off the timing of another?


You're welcome.

There is only one OS task for each rate (10ms, 100ms, etc). The order of the function calls in each task is determined at compile time while respecting constraints (A_10ms must run before B_10ms because B_10ms uses values computed in A_10ms).

In addition each OS task has an overrun monitor that triggers if a task is scheduled to run but the previous run has not finished yet. This type of error typically sets a "control module performance" diagnostic trouble code.
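
A minimal sketch of what a rate task might look like (the component names, the overrun helper and the flag-based check are all illustrative; in an OSEK-style OS the kernel itself detects a task being activated while a previous activation is still running):

  /* Fixed call order decided at integration time; prototypes shown so the
     sketch is self-contained. */
  void SensorAcquisition_10ms(void);
  void GizmoController_10ms(void);
  void OutputDriver_10ms(void);
  void ReportOverrun(void);              /* hypothetical: latches the trouble code */
  
  static volatile unsigned char Task_10ms_Busy;
  
  /* Body of the 10ms OS task. */
  void Task_10ms(void) {
    Task_10ms_Busy = 1u;
    SensorAcquisition_10ms();            /* A before B: B uses A's outputs */
    GizmoController_10ms();
    OutputDriver_10ms();
    Task_10ms_Busy = 0u;
  }
  
  /* Called when the 10ms task is due. If the previous run has not finished,
     report the overrun instead of activating it again. */
  void Schedule_10ms(void) {
    if (Task_10ms_Busy) {
      ReportOverrun();
    } else {
      Task_10ms();                       /* ActivateTask() in a real OS */
    }
  }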


I'm not an embedded developer, so I don't have much value to add re: architecture; however, this caught my eye:

> The microprocessor has to run for 10+ years in a wide range of temperatures and be dirt cheap, so you end up with specs like 180 MHz, 4 MB of flash and 128 KB of RAM.

If I'm paying tens of thousands of dollars for a car, how come they're using the cheapest possible components? If tinkerers can ship the Raspberry Pi for $30 per board, inc. a 900 MHz quad-core chip and 1 GB RAM, you'd think GM could get components at least that modern for an insignificant cost relative to the car they are controlling.


This is because the parts are orders of magnitude more expensive when they have to survive harsh environments.

These "really slow" parts are actually tested and built for much more extreme conditions. For microchips that go into satelites, you even have hand-checked chips that go through a very long (and costly compared to something like a Pi) testing process. Multiply this by all the eletrical components, and you got yourself a lot of things to check.

Put your Raspberry Pi next to a car motor, and it's pretty likely(1) a part will fail in the heat and grime conditions.

(1) actually, I'm not sure about the likelihood, but there's no assurance that it will be fine


> Put your Raspberry Pi next to a car motor, and it's pretty likely() a part will fail in the heat and grime conditions.

Depending on how hot your motor goes, there's a good chance that unshielded Pi will simply de-solder itself into pieces.


Fun harsh environment: I reviewed software a few weeks ago that ran on one of our boards while it was irradiated by a neutron source (I work in aerospace). Radiation has interesting effects on electronics :).


The $30 Raspberry Pi (2) reboots itself when exposed to strong light. The RPi line is also totally unreliable; it tends to bug up and/or crash after running for too long.

The car components are made to a different spec than consumer electronics (for starters, there are several types of component, depending on what "range of temperatures" you need to operate them in; e.g. the stuff that goes into the traffic lights in northern parts of Europe is not something you buy off Farnell for your hobby project), and they also need to be reliable and tested. Add, of course, the usual graft and overhead.

Another thing is that an engine controller doesn't need a near-GHz quad-core chip and a shit ton of RAM. It's not meant to stream HD videos or run Python, it has to run a bunch of feedback loops fast enough. Less powerful chips tend to be cheaper and more reliable.


All good points.

The CPU, Flash and RAM share the silicon die with lots of specialized peripherals like CAN controllers and hardware timer units. There's a lot more on the chip than just a CPU.

Also, cost is king in automotive...


> If I'm paying tens of thousands of dollars for a car, how come they're using the cheapest possible components?

Save a dollar on the part, and you make a million cars, how much does that add up to? Or conversely, why spend more than you have to if the quoted specs do the job? What the hell am I going to do with four cores if I follow the well-written description above?

And I'd give your Raspberry Pi a lifespan of about a month if you were to strap it to the firewall of your car while you drive it. Less if you actually powered it up.


You'd do it because the million dollars in parts is peanuts compared to the combined salaries of the engineers who are paid to have the specialized knowledge to do this, and the time it takes to bring up a new application/car. I know that a million dollars is a lowball figure, but this same principle can be applied to other components and scaled up (to a point).

Potting electronics in epoxy isn't terrifically difficult, and would greatly increase the durability. I'm willing to bet that a potted Beaglebone Black (for its real-time subsystem) would do just fine in my engine bay for a very long time. The hard part would be getting a single large-pin-count plug coming out of it to handle all the inputs.

This point is actually kind of moot because for a long time the ECUs of cars were located in the passenger cabin with wires fed in through a grommet in the firewall.

I'd wager that the cost of replacing this system with some standard real-time platform is so monumental, it won't ever be done. Good luck getting the automotive equivalent of BSPs (board-support packages) running on any system other than this.

Where this gets interesting is when the people who actually know this system start retiring/exiting the market. There will be a pretty strong incentive to 1) keep reusing the same system with no new development, 2) accept the lead time in training students/etc, or 3) start paying crazy salaries for people to go out of their way to learn it.


Cars are life-critical systems. If GM half-assed it by putting...

> a potted Beaglebone Black (for its real-time subsystem)

...into an actual production car, what happens when the thing breaks down and kills someone? Death, recalls, lawsuits, and massive brand damage.

> This point is actually kind of moot because for a long time the ECUs of cars were located in the passenger cabin with wires fed in through a grommet in the firewall.

Even in the passenger compartment, cars aren't exactly friendly environments for electronics. They have large temperature swings (from a controlled climate to very hot in summer or very cold in winter). So putting it in the passenger compartment doesn't necessarily moot concerns over withstanding extreme environments.


The first point was meant to illustrate that it's not good enough to say that it's okay to not innovate because there's no better hardware available. I'm not suggesting that GM would seriously put a potted development board inside a vehicle. TI can simply produce automotive-grade equivalents for them.

What I'm suggesting is that things like higher sampling rates of input sensors and more composability and reusability can be realized by a newer, more standardized and open approach to hardware.

I would also wager that the outcome of failure would be the same as with the traditional systems. GM shifts the blame onto the ECU development shop that it contracted out to make the hardware or software, or the QE shop that failed to pick up the failure.

Vehicle electronics are typically rated from -40C to 125C, although they routinely fail even inside these margins. The BBB for example has an operating temperature range of -40C to 85C. There are also plenty of industrial boards that meet or exceed the requirements of automotive operating environments. Companies such as TI are (in part) releasing development boards (such as the BBB) to try to inspire innovation in these environments. The dev boards themselves aren't rated for it, but the platform can be.


There's a few things at play here.

Economies of scale are huge, of course. A lot of the parts are actually shared between many different cars at different price points. There can be on the order of 50+ microcontrollers in that car. Do you want to pay $10 extra for each of them just because they have cooler specs?

The various parts are designed at various times, so a car might have controllers for the anti-lock braking system that were designed 5 years ago.

Hard real-time applications (brakes, ECU, etc.) don't magically need more gigahertz: an engine with direct injection only needs to calculate fuel volume maybe 20,000 times per second (once per cylinder every two revolutions), and even that would probably be a ridiculous upper bound.

In-car navigation and entertainment packages are completely separate from the hardware actually controlling the car.


You'd be surprised at how much "good" software can accomplish. Just think of an 80 MHz embedded CPU. You have 80,000,000 calculations per second right there in your hand.

In a side project I was involved in, we were using 60 MHz chips to modulate data at 1 kHz. I kept worrying about running out of CPU cycles; I mean, after all, I was doing a few thousand things per second. But in reality, I still had a ton of headroom to go :)


Thanks for the post. Very good read.

When coming to work from university (1989-90), I made a somewhat similar though more primitive design for the embedded software of a redundancy controller of a satellite modem. The various tasks (control of RF and baseband electronics, display and buttons UI, network management interface, etc) would run according to a scheduler, each in turn. No dynamic memory allocation, internal communication via global variables that are written only in one place. There was no mechanism to actually enforce this, it was just the principle in code, and the enforcement was possible because it was a one-man software project so I had no one else to blame.

This was written in PL/M (which looks like Pascal but is more similar to a primitive variant of C) on an 8031 microcontroller, from scratch (no 3rd party SW components at all - that was long ago).

Later on, I used the same platform for a telephone/data cable modem we designed for domestic use (we saw a need for an internet-enabled cable modem in 1992 and started work on a thing that provided POTS and 128 kbit/s over SLIP). That was also running an 8031, but as we wanted to make it dirt cheap, the board did not have dual-port RAM. The hardware wizards made it run with a hack on the regular RAM access so that the SW could only access RAM if it read any memory location twice: the first read would just place the address bits on the bus, and doing another read would actually fetch the content. This enabled us to save several dollars per device.

(This was possible because the 8031 has an architecture which is not strictly von Neumann design: the code and address spaces are different memory spaces and you use different CPU instructions to access the different memories, so your program memory on EPROM works normally but your data memory in external RAM does not. There's also a small internal RAM in yet another overlapping address space which was not impacted by address bus weirdness.)
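
A sketch of what that access pattern might look like in C terms (the original firmware was PL/M, and a real 8051 compiler would also need its memory-space keywords such as xdata; this only illustrates the read-twice idea):

  /* External RAM behind the address-latching hack: the first volatile read
     puts the address on the bus, the second one actually returns the data.
     volatile keeps the compiler from optimizing the first read away. */
  #define XRAM(addr)      (*(volatile unsigned char *)(addr))
  #define XRAM_READ(addr) (XRAM(addr), XRAM(addr))
  
  /* usage: data = XRAM_READ(0x8000u);   (made-up address) */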

This is just to show what kind of strange limitations you sometimes have on the hardware in embedded devices. Unlimited virtual memory, large non-volatile storage instead of a puny I2C bus EEPROM... you just dreamed of those. Modern controllers with 4 MB of flash are really huge compared to that, but they are still far below what people have on desktop computers, or even an Android phone.

The limitations in embedded devices indeed come from cost per device as well as heat, vibration, radiation, humidity and similar environmental factors.


Thanks for the insight, I often wonder what automotive software and its development looks like.


I found this extremely demystifying.


I'm working with embedded in a similar industry, and also really enjoy web dev. I don't like the tools and the edit-compile cycle when working with small microcontrollers...

May I ask why you left the industry?


The "closed by default" attitude drove me nuts. Teams develop tools to improve their work and refuse to share the source code, even internally, even when there is a business reason (I'll improve the tool for you and me). Forget about cross-company collaboration on open source like you see in web dev.



