
I like Linux and have used it for over 22 years now; I did some kernel development. I worked for Samsung on TizenOS and had some discussions with Intel on IVI (their automotive offshoot). I then went to work for Intel. In my opinion, one thing Linux is never going to be is "automotive grade".

There is a conflicting set of requirements when developing Linux, and in the end it sits somewhere on a spectrum of compromise between performance, security, stability, features and development cost.

And I think this is right: Linux is real software for the real world, and it would be wrong and wasteful for a general-purpose OS to prioritize one requirement to the exclusion of others.

Unfortunately, automotive-grade software absolutely requires a focus on stability, and you just can't get that if you also want performance and features and want to do it cheaply.

I am also of the opinion that Linux is just way too complex to ever be stable enough for controlling your car. Think about Android. You get an OS tightly managed by Google, running on purpose-built hardware (I have a Pixel 2 XL), and it still regularly crashes on me and does otherwise funky things. I have no illusions that the car industry is going to do any better.

Now, I have nothing against using Linux for everything else in a car (infotainment, navigation, etc.), but I would not trust one that has anything to do with basic controls.



As someone who develops mostly bare-metal embedded systems: not everything that is simple is also golden and stable, especially compared to the Linux kernel.

Honestly, I have never heard of a case where an OS being too complex was the cause of a catastrophe. Embedded systems, on the other hand...

I think the kernels of popular OSes are far more battle-tested at this point than a standalone system can ever be. Sure, maybe don't give essential controls to any complex (technical) system without the driver being able to correct it. Just look at Boeing and MCAS, which was probably not based on Linux.

I think the automotive industry's current problem is that it has too often relied on proprietary systems and buses. An open system could handle the diversity of suppliers that is common for most manufacturers, and could supply a common base to develop against. I don't think it is fair to compare the stability of Linux to Android; they are worlds apart.

Tesla's OS is based on Linux, so we have a case where it does indeed work, at least for supplemental functions. And I fear alternative choices could be the death of other manufacturers, because their software would fall behind.


I have seen software for a unit in a car with no safety-relevant tasks that was written on a custom, homegrown RTOS, with half of a standard library reimplemented from scratch for no reason at all.

Their supplier did all of this on purpose, to create lock-in, which worked - now they charge by line of code.

Any old Linux kernel haphazardly patched together to run on the target would be better than this garbage.

Do airplanes still run an emulator for a chip that is no longer produced because nobody wants to port the software? That was a fun story I still hope is an urban myth.


Not an airplane, but I have a few HP calculators (50G, 12C, 15C) that run an emulator for a chip that is no longer produced because nobody wants to port the software. They still make the 12C.

I wouldn't trust them to port it without introducing weird bugs after seeing stuff like the 35S anyway.


A lot, if not the majority, of PoS terminals around here run some DOS software, usually on top of an emulator. Sometimes you can even see a DOSBox window on top of a Windows desktop.


Just a few months ago I caught a glimpse of DOS in a multi-national home improvement store. Turns out all of their computers are fairly recent ThinkCentre machines, all running Windows 7 that is locked down to 3 programs: calc.exe, Internet Explorer 10 and some proprietary DOS emulator they pay something like 50k€ annually for, running a TUI client for their inventory and order system.

I don't even know how to sort those by level of stupidity...


If we are talking credit card terminals, then this depends a lot on the implementation. You can have whatever you want on a terminal as long as it meets PCI DSS/PED requirements, which are mostly concerned with the safety and integrity of the transaction. Safety means your PIN and private issuer data (the data your bank put on your card) stay safe, and integrity means nobody can meddle with the data or the flow of the transaction (so the amount you approved on the display is the amount of the transaction at your bank, is the amount printed on your receipt, etc.).


> Just look at Boeing and MCAS, probably not based on Linux.

This just reinforces your general point, but it wouldn't matter if the 737 ran on Linux. The MCAS behavior is part of the "business logic"; no underlying OS would have prevented that issue. Just like the safety of the Linux network stack doesn't prevent grandma from emailing her CC details to a scammer.


Boeing’s management has done a good job recasting what were deliberate business decisions that drastically reduced safety into bugs and technical errors, at least in the public eye.

It doesn’t seem the FAA is being too strict on them in this either, but I suspect the Europeans and Canadians will not go as easy on Boeing as the FAA has.


The only thing I'd argue is that from what I've seen of car software, it is generally horrible and a battle-tested kernel, even if seriously stripped down, would be a marked improvement.

I think a lot of people working with these closed systems try to perhaps impress those not in the field by cloaking it in mystery, but a lot of these systems are honestly dumpster fires. Not left-pad level, sure, but nothing at the level where Linux wouldn't do.


From what I've seen, Tesla doesn't use Linux for the critical parts (i.e. everything directly connected to the CAN bus). The gateway between the vehicle side and the rest runs FreeRTOS.

The rest is indeed Linux. I think it is a customized Ubuntu. The "rest" includes the autopilot...


You & upstream ("Imilcim") are both right, but both your examples are flawed (Android is not really Linux, MCAS is a logical design flaw).

Linux can be very stable (most of the time it is).

I think that in this context "Quality Assurance" (QA) is king, independently from what it tests.

Linux has a lot of stuff included in the kernel, and that stuff is actively used by a lot of people in a lot of different (potentially unexpected) ways. So if QA can certify both custom-written software and Linux (for the needed usage), I would tend to trust Linux more than software written by dedicated teams in companies, because QA could miss tests that would have been indirectly caught by the Linux community and its users.


Safety-critical systems have specific processes that need to be followed and it's definitely not as easy as using a simple OS, but it does help of course to keep things simple.

And that's the problem with Linux: it's huge and it's always changing so that one cannot make any guarantees about anything except that it's broken in known and unknown ways. Saying that it's battle tested is not as useful as one would think, since any small kernel config change would result in a different configuration with different behavior. Abstract Linux is battle tested, but the config you want to use is probably not that well tested and even if it is, the whole development process is so chaotic that there's no guarantee that somebody of sufficient skill will report even the majority of encountered bugs and that those will be fixed.


> Think about Android. You get an OS tightly managed by Google, running on purpose-built hardware (I have Pixel 2 XL) and it still regularly crashes on me and does otherwise funky things. I have no illusions car industry is going to be able to do any better.

Android regularly crashes in part because it's very complex, but also in part because there's really not much of a problem if it crashes occasionally. Cars have much stricter requirements.

The car industry has more or less settled on a process, "functional safety certification" (FuSa for short); if you can verify that you've followed the process, then you have a reasonable defense in court if the software then crashes.

> Now, I have nothing against using Linux for everything else in a car (infotainment, navigation, etc.) but I would not trust one that will have anything to do with basic controls.

Indeed; you run Linux inside a safety-certified hypervisor, and run your controls in specialized, safety-certified RTOS guests.

ARM has settled on Xen as the hypervisor for their automotive reference stack, and there are currently efforts to achieve safety certification for Xen:

https://www.youtube.com/watch?v=boh4nqPAk50


"Android regularly crashes in part because it's very complex, but also in part because there's really not much of a problem if it crashes occasionally. Cars have much stricter requirements."

I don't remember a single Android crash or freeze in recent memory (various Samsung Galaxy S-series). Not that my iPhone crashes either.


I can't remember it freezing so hard it required a restart, but I've had iPadOS 14 do its "respring" black screen with a spinner crash in the last couple of days.


iPadOS 14 is still in beta. Crashes on beta aren't representative of released product.


Whoops, meant 13 for my iPad. Mixed it up with a spare 6S where I have the iPhone beta.

Haven't seen the beta crash but I'd attribute that to not using it on either of my main iThings.


That is based on Unix.


Parent comment noted "Not that my iPhone crashes either," so I figured it was worth mentioning that sometimes they do


For payment terminal and pinpad security there is the concept of a Hardware Security Module. This is a separate processing unit in the terminal that is managed by the main application in a very limited way, so even if the security of the main, feature-rich app is broken, the cryptographic material and processing logic stay safe.

Another solution used in terminals is that, technically, the pinpad (keyboard + screen) must be controlled by that separate computer. This is achieved by galvanically switching the keyboard and screen over to it to perform PIN entry, then switching ownership back to the main application, which can then fetch the encrypted PIN block and perform the rest of the transaction, never having access to either the plaintext cryptographic material or the PIN.

Now, these could be solutions for car security, but they would still require separation so strong, and applications so different, that they could just as well be different OSes with no commonality.


> must be controlled by that separate computer

Right, the "simple straightforward" solution for all of this is to have a separate SoC for each control unit you need.

But this has lots of things that are suboptimal:

1. Less flexibility. You need to know ahead of time exactly how many SoCs you need. If at the 11th hour you decide you need another unit (or need one fewer), you have to go back and redesign your entire electronics layout.

2. Less efficiency. If your unit typically only uses a small portion of its compute power, it's still taking up space, taking up weight, and drawing a basic amount of "idle" power.

With virtualization, your RT guest OSes typically still want their own dedicated core. But it's a lot more efficient to put a 6-core chip on a system that only needs 4 cores than to design 6 separate SoCs onto a system that in the end only needs 4 of them.


I have worked with Linux on tight embedded systems that require 100% up-time in safety critical environments.

I disagree; it's fine for automotive use, but it will be a stripped-down fork optimized for it, just like uClinux and other special-built distros.


With multicore processors becoming the norm, a combination of isolcpus, some low-latency kernel options, and some virtualization is all that's needed, imo.

I get sub-microsecond response times on my servers which, afaik, is a few orders of magnitude better than what's needed for control systems.

Basically, on a modern ARM chipset, the requirements of a Linux distro suited for cars aren't particularly exotic these days. All the logical bits are there; it just needs firmware/drivers.


Real time has nothing to do with response times or performance. It has to do with predictability and the ability to reason about how much time an operation will take (most of the time it is enough to have some kind of hard limit on how long something can take).

This means that you need all your operations that you call from critical realtime code to not suddenly start doing some housekeeping.

On Linux, even an operation as simple as dereferencing a pointer can cause a context switch into the kernel, which then suddenly starts doing its memory management stuff (not really Linux's fault; that's how the x86 architecture is built).

So while a machine might be able to respond "orders of magnitude" better than control systems need, it is the one response in X thousand or million that misses the limit that is the deal breaker here.

I have implemented an embedded controller for a coin acceptor. If the driver does not respond within X milliseconds, the coin passes the gate and there is one more angry customer.

I have implemented a framework for algorithmic trading for one of the largest brokerage houses on the Warsaw Stock Exchange. You have XX microseconds to respond to a market event, and if you don't, there can be financial consequences.

Both apps were implemented on Linux. In both cases I had to work my ass off to track down a multitude of potential causes of unpredictability, and I think I did a pretty good job, but I cannot guarantee that it will always work within the time limit. If somebody's life were at stake, I would refuse to do it.

Did you know that x86 CPUs stop from time to time to do their housekeeping, like checking temperature to decide whether to throttle or not? You can turn it off (at the cost of the server potentially overheating) but how many more potential sources of problems are there?


Yes, I am extremely familiar with high frequency trading, I've been doing it for about 9 years now on most of the highest volume exchanges in the world.

Satisfying hard real time requirements is hard, but not impossible. When testing, set a fault/trace point on long response times, debug, rinse, repeat.

The point is that Linux (or QNX or whatever other RTOS) has way more resources, libraries, and commoditization than a dedicated microcontroller/FPGA, and is more widely tested than some proprietary OS.

And those Intel hardware faults you mention, aside from being irrelevant to the kind of architectures a car would use, are on the order of microseconds, well within the operating requirements of a car (or coin system).


What is coin in this context CoolGuySteve ? (HFT ?)


He meant algorithmic trading, which is close to (but not exactly the same as) high frequency trading. HFT is obsessed with latency. Algorithmic trading is mostly about automating the wishes of a trader, because if he can execute his strategy faster, he gets a bit of an additional edge.

The coin I mentioned is a completely different area. That was for a fixed-odds betting terminal. The company ported their software from an RTOS to Linux, but they had issues with occasionally missing events generated by their custom hardware board controlling the device. This being a betting terminal, it caused grief to customers, and unhappy customers == no income.

I was contracted to solve this issue for them, which I figured meant writing a proper kernel driver for their board, restructuring their existing code, and tuning their Linux kernel/configuration.

The device they used as the coin acceptor was very cheap (as one would expect for a device that returned its cost in a couple of days of operation...) and it had no controller, just an analog output. The output showed the current value of something related to the current induced while the coin falls through a piece of tubing inside an energized coil. The software has to probe the value at regular intervals while the coin passes the sensor, then classify it as one of the possible coins, or an unknown object, based on the duration, strength and profile of the signal. These relate to the dimensions and materials used to make the coin.

The precision at which you perform the measurements greatly influences the precision of detecting the various types of coins, and it allows tighter tolerances, which helps prevent fraud.

Once the coin passes the detector, it has a couple of centimeters before it arrives at a gate that sorts the coins to their possible destinations.

The software has to monitor the analog value periodically (multiple times per millisecond), and then has a couple of milliseconds to decide what the coin is and send a command to the gate.

There is not much to it, unless you have some other interrupt completely disrupting the process, and that is the part they could not figure out how to solve. The calculations are not complex and there is plenty of time to perform them.

I suggested they could easily implement it on their board and return a buffer of all the values to Linux, but they told me they have a huge number of these machines in the field and don't want a new design for their controller.


> With multicore processors becoming the norm, a combination of isocpus, some low latency kernel options, and some virtualization are all that's needed imo.

Have you worked in vehicle embedded systems or are you just pulling requirements (or lack thereof) out of your hat?

I'm not an expert, but I would guess that they do require deterministic behaviour, which, AFAIK, Linux doesn't offer. What happens if for some weird reason the scheduler starves the critical process for 40 ms in some uncommon situation that only happens 0.00001% of the time? How can you prove that such a thing won't ever happen?


His point is that there are ways to keep the OS from ever fooling with certain cores after startup. At the extreme, you put an exo-, para-, iso-, whatsit-kernel on them, but just starting a process that does all its kernel activity at startup and then never makes another system call is more common.

Usually that involves mapping device registers into the process address space via a custom ioctl, and then just polling in a tight loop.


That's not really enough. For example, ANY operation involving the filesystem or memory access has no defined limit on how long it can take.

Your filesystem can decide to do some housekeeping while you write, and something as simple as dereferencing a pointer can cause a context switch to the Linux kernel to do the TLB thing. You might want to allocate some memory, and then Linux might decide to move a bunch of memory around before it gives the pointer back to you.

What I mean is, being able to guarantee the execution time of a piece of code is not a task for the faint of heart. You need to forget I/O. You need to forget switching context to Linux (basically forget any syscalls). You need to forget allocating any memory after program startup. You need to use really huge pages. Forget being green, because changing CPU frequency or sleeping can cause unexpected latency in the middle of an operation.

Also, doing this inside a VM is nigh impossible.


If you are not doing any system calls, it means you are perforce not doing file system operations.

Memory is normally pre-allocated and mlocked into place at startup. TLB misses can happen, but those are filled in hardware, just like regular cache misses, not by kernel activity (and often from cache). The "TLB shootdowns" you read about happen when memory is unmapped, so you don't. The kernel can block you if you write to memory mapping a file backed by a physical disk, so you don't.

And, yes, the core is locked in C0 state, drawing maximum power full time.

It is certainly possible to do I/O, by operating directly on the device registers that I noted were mapped into process memory. And, in fact, this is a routine method of conducting such operations. You can read about it by searching "kernel bypass".

There is usually no need or desire to do it in a VM, but kernel-bypass libraries are quite capable of mapping their registers into a VM's memory, so somebody must be doing it.

Commonly, input from network interfaces is dumped into one or more big-ass ring buffers mapped onto /dev/hugepages; and other, regular processes, running on other cores, map those read-only and pick over it for interesting bits, and maybe do file system operations of their own.


Automotive grade Linux is not controlling the car.

AGL targets infotainment systems and other aspects. It can have access to more important data but it's not controlling the car.


At a high level that seems true for now, but the site does say this:

"Although initially focused on In-Vehicle-Infotainment (IVI), AGL is the only organization planning to address all software in the vehicle, including instrument cluster, heads up display, telematics, advanced driver assistance systems (ADAS) and autonomous driving." [1]

That's getting pretty damn far into controlling the car.

[1] https://www.automotivelinux.org/subaru-outback/


If they add everything other than the last part, I think it's fine. Something used for self-driving should be on its own separate hardware, unrelated to any hardware used for any other purpose in the car, to fully isolate it at the hardware level, so your infotainment system crashing doesn't take the whole system down and render your car undrivable for a moment.


That's orthogonal to the OS, no? You can (and should) have separate hardware for separate functions while running AGL on the lot.


Telematics can be dangerous also. Things downstream might be using that data for things they shouldn't...safety systems, etc.

Also, couldn't ADAS be an issue? Sending the driver lagged lane encroachment info, for example.


> Also, couldn't ADAS be an issue? Sending the driver lagged lane encroachment info, for example.

"Lagged by how much?" is the qualifier that determines whether that is a problem. 100 ms wouldn't be a problem; 200 ms could cause issues but likely wouldn't result in crashes.


ADAS can brake, accelerate and steer the car.


Of those, only ADAS and autonomous driving sound particularly worrisome to me. Sure, a frozen instrument cluster would be inconvenient, but I'm more worried about the microcontrollers that control my brakes.


There were a bunch of attempts at this, and I even have passing experience working with the Intel guys on their IVI project, which was integrated into AGL (https://wiki.tizen.org/IVI). I don't know what makes people think it will work this time when it did not previously. I just know that it was previously restricted to infotainment, even in its name and mission statement, and that no longer seems to be true.

"Automotive Grade Linux is a collaborative open source project that is bringing together automakers, suppliers and technology companies to accelerate the development and adoption of a fully open software stack for the connected car. With Linux at its core, AGL is developing an open platform from the ground up that can serve as the de facto industry standard to enable rapid development of new features and technologies."

See, nowhere does it mention that this is for infotainment only.


Aren't all self-driving systems complex enough to require a full OS like Linux? And those systems are very much "controlling the car".

Of course it makes sense to have a special-purpose microcontroller that handles real-time tasks related to engine and brake control. But those devices lack MMUs or even caches, so running Linux on them is not terribly interesting.


Sort of. Linux is tricky because it implies one set of compromises, and automotive requires a different set.


What specifically does Linux compromise on that prevents its use in automotive control? Obviously specific components should be implemented using PLCs, FPGAs, or bare-metal/FreeRTOS-based microcontrollers, but for the overall control system of a vehicle, nothing specific stands out to me as preventing it from functioning properly.

Please feel free to point out all the many ways I am wrong as I imagine I am missing something here.


All the things you would choose an RTOS or bare metal for.


Intel is investing heavily in Linux for more serious automotive uses, like this instrument cluster that is already in production:

https://projectacrn.org/acrn-project-chery-exeed-launch/

For deterministic, high-frequency real-time work, like controlling an internal combustion engine or the damper control of the chassis, for heaven's sake use a deterministic CPU running a real-time OS.

However, the parameterization of that real-time control, based on sensor fusion, object detection and trajectory planning (all high-data-volume, highly dynamic processes), can happily live on Linux.

Linux thrives in high performance volume data processing in networked environments.

The challenge is not just to show but to rationalize and "explain" its dependability. It works; how can we explain that this reliability is not random, but explicably dependable?

That's what https://elisa.tech is all about, and my gut feeling is that Linux will move from infotainment to certifiable functional safety within the next year or two.


Linux is used for SpaceX's Falcon 9 and Crew Dragon. If it can be used successfully in space flight, why shouldn't it be used in cars?


There is a huge difference here. Cars clock billions of hours of operation in an uncontrolled environment. They must deal with decades of use, abuse, neglect, sabotage, third-party parts, etc. Anything that can happen WILL happen in a car.

Compared to this, rockets are a tightly controlled environment running for a very short time. There is also an incredible amount of redundancy and supervision in a rocket, but in a car you will not be able to double up every critical component and add a separate controller to check whether both return matching results and which might be faulty, and you will not have a team of specialists obsessing every time something did not work exactly right.


Why wouldn't it be possible to have a similar system in a car as in a rocket? The manufacturer could easily rack up 3 somewhat cheap compute nodes to validate the outputs. This really shouldn't cost more than $10k extra for the end consumer. And given that a lot of cars sell for $100k or more, this cost should be negligible for self-driving capabilities.


Every production car is built down to a price, even expensive luxury cars with large margins. All car segments are extremely competitive and any efficiency you can gain over your competitors can be put into more features, higher quality, lower price, or more profit. Over-engineering isn't that much of a marketing point or Saab would still be around.

That said, mass produced electronics are cheap - even automotive grade ones. It wouldn't add that much to the BOM, it would be mostly design cost. I just can't see the justification without it being required by regulation.


I can't decide if you're serious or trolling ...


Because rockets have tight tolerance requirements, but very low reliability requirements. Consider this case [1] where missile software leaked memory because it would blow up before it ran out of memory. It would hardly be reasonable to state: "If missiles can leak memory, then why shouldn't that be okay for a car." even though missiles are probably more advanced, complicated, and more difficult to make than a car. This is because the "features" are largely orthogonal from the "reliability" of the features which is really a measure of how consistently the features as promised are delivered.

In the rocket case, you need very tight tolerances, but it does not matter much if it fails 1 in 1000 times since the total-system-failure rate is on the order of 1 in 100. A single 1 in 1000 failure mode does not massively increase total-system-failure rate. However, in the car case the tolerances are worse, but the requirement to consistently deliver on those tolerances is vastly more important. Consider Ford Pinto which sold 3,000,000 cars and had 27 deaths attributed to the fuel tank design defect. A 1 in 100,000 failure mode is viewed as an unconscionable deathtrap, so adding a single 1 in 1000 failure mode would make your system 100x deadlier than the most egregious example of an unsafe system.

The prevailing rate of system errors tells you what sort of errors a quality assurance process can detect. If you have 1 in 1000 system errors, then it means you can probably detect things on that order otherwise your system error rate would be dominated by those higher frequency errors. However, you probably can not or do not bother to detect 1 in 1,000,000,000 errors since they would either happen so infrequently compared to other errors or not be worth solving since they have such a minor impact on overall reliability.

So, in summary: reliability is different from tolerances, techniques used on low-reliability systems are not necessarily applicable to high-reliability systems, and rockets are on the order of 10,000 to 1,000,000 times less reliable than cars, so technology used on rockets does not necessarily transfer to high-reliability industries.

[1] https://groups.google.com/forum/message/raw?msg=comp.lang.ad...


To the OP's point about infotainment being okay, I wonder if the core functional components of space flight (RCS, etc.) are controlled through Linux. It would surprise me if it wasn't used for high-level things (displays), but I'd doubt the "keep the rocket flying on course" stuff. I can't really find anything about what's controlled by that Linux box in the Falcon 9.


Yes, they are. IIRC, they run a modified kernel that is closer to real-time, and everything runs in a single thread for determinism. The code runs on a dual-core processor, with one copy on each core, to detect bit flips. Three such modules are connected together in most places.

I believe this is the setup they use for the main stabilization computer on each stage, as well as the engine controllers for each engine.


It is being used for high-level things, and so is Electron.


Manual flight controls are run on Linux via Electron.


I'd love for somebody with knowledge to comment re: the SpaceX flight control platform. There's Linux in the UI to be sure, but I'd strongly suspect there's a hard-realtime OS that's doing the hardware-level communication to the flight hardware.


There was a recent reddit AMA with some SpaceX engineers that confirmed that they run a buildroot system with RT patches for most things, and a handful of microcontrollers for the timing critical stuff.


I only saw it used on the recent crew mission, but wasn't Linux basically used as an infotainment system there as well? The presenter even explicitly mentioned there was a backup system in case the fancy (HTML+JS-based) user interface crapped out on them.


Space is a much more controlled environment, and the uptime is low compared to millions of cars driving every day.


Honda has been shipping Android-powered entertainment systems with direct access to the CAN bus (which may even include vehicle control, based on geohot's work) for several years now. Just as on phones, as long as this isn't being used for timing-critical operations like an ECU and doesn't have any sort of remote network access, it doesn't seem to be a major source of issues.


Direct access to an isolated CAN bus used only for vehicle infotainment.


You missed his reference to comma.ai/autopilot


He could be right. I was operating under the assumption that it's the same CAN bus access they're using; it's entirely possible it's only an isolated segment.

I did not confirm this; it's purely an assumption on my part, as the system does appear to have access to HVAC, lock settings, and fuel information, which makes me suspect it's not so isolated.


This is basically what I've heard from automotive engineers when I've asked about "Android for your car".

Paraphrasing: "You can remote jailbreak Android - do we really want to trust every vehicle in the country to use it for their control systems?"


Except that many brands use Android to power their infotainment systems these days. The people you talked to obviously didn't quite know what they were talking about - it's like saying embedded Linux on a computer can be "remote hacked" because there's a GNOME userspace exploit somewhere in the world.


I think the person had a qualm specifically about using Android vs. an embedded Linux or other embedded software for auto control systems, and the prevalence of Android remote hacks was their concern - they were happy about the state of Android Auto and infotainment consoles.

I think they are knowledgeable on the subject but I may be misquoting them or not including some nuance about the security concern.


> Paraphrasing: "You can remote jailbreak Android - do we really want to trust every vehicle in the country to use it for their control systems?"

FSB, GRU, Mossad, MI6, NSA, CIA, GCHQ, DGSE, CSIS, ASD, CSE, BND, NIS, PSIA, GCSB, ISI, RAW, MOIS, MSS, MIT, GIP, CISEN, NIA, RGB, and all other intelligence agencies in the world: "Yes, that is a splendid direction."

On a serious note, given the automotive industry's historical track record of updating their car software in a timely fashion, and of doing so for as long as a car is kept running (with the possible exception of Tesla), I'm not sure any OS will be sufficient. Without some legal definition of orphaned automotive software (something like X unfixed CERT vulnerabilities over Y years), and legally mandated regulation that open-sources the ability to load alternative software, that security drift will always be the Achilles' heel of any gear that uses software, not just cars.

This is one of the reasons I've become a lot more selective with hardware gear. Any gear that lets me use open source software immediately goes to the top of the evaluation list above closed source vendor software, because I've been stranded way too many times by hardware with closed source software the vendor abandons for a newer model.


> Paraphrasing: "You can remote jailbreak Android - do we really want to trust every vehicle in the country to use it for their control systems?"

We already trust every vehicle in the country to use jailbroken software in their control systems.

Mechanics regularly upload modified manufacturer and third party firmware.

Car enthusiasts modify and trade patched binaries for their cars' ECUs on forums like phone jailbreaking enthusiasts do on XDA-Developers.


The existence of remote attacks was perhaps the most important concern for them on using Android specifically, perhaps I should have phrased it as "remote hijacking/escalation".


Can you not jailbreak automotive software?


Seriously, what does your phone running hundreds of apps like TikTok, browsers, Facebook and everything else have to do with a closed system like car infotainment?

It's like saying that Linux will be crashing while driving SCADA systems because you've seen your KDE desktop crash a few times.

C'mon you guys are supposed to be engineers, think a bit -_-


An OS that crashes when pushed moderately hard does not inspire confidence for something that must have very near 100% uptime.


Usually it's not the kernel that crashes, but the desktop environment or some other user-space software.

I can't remember the last time I encountered a genuine kernel panic on my desktop. I don't imagine that these "Linux powered cars" will be running KDE Plasma, Gnome, or anything else remotely as complex as a modern DE.


True. Android isn't such an OS though. As proven by the fact that so many vendors use it to power their hardware - including existing car manufacturers and other hardware builders.


I looked into a bunch of options when deciding on a new ECU for my car. Ultimately, the idea of leaving the reliability of my vehicle up to a more complex stack of software - with dependencies, updates and bugs within the layers beneath the ECU software itself - didn't feel like the right choice.

I also really love that aftermarket ECU manufacturers have been able to find a market, and I wanted to support that. Almost all of the ECU companies I've been in contact with were started by one or two engineers as a commercial project for enthusiasts. You just don't see bread and butter circuitry and embedded software companies solving real problems like that getting a fair go in the market these days. It's a niche where talent makes your reputation, and it's not a space big enough to get "disrupted" by the Venture Capital bubble.


>Now, I have nothing against using Linux for everything else in a car (infotainment, navigation, etc.) but I would not trust one that will have anything to do with basic controls.

Comma AI has been shipping self driving software based on Android for 4 years now. It's rock solid and completely safe, with millions of miles driven [0]. The idea of everything in automotive requiring formally verified aerospace grade software is just nuts. It's better to have flexible, adaptable software whose failure modes are fully covered than to rely on an illusion of perfection that will never actually exist.

[0] https://comma.ai/


When you're talking about AI, your software suite looks very different. The actual running of the vehicle's engine and subsystems is something that is very well known, and can be handled by embedded software/hardware quite easily. It's not that I don't think more complicated stacks can handle it, it's that I don't think they're needed. That said, no matter which way you skin it, if you increase complexity then you increase risk.


>That said, no matter which way you skin it, if you increase complexity then you increase risk.

This isn't really true. For example, if you add redundancy you technically increase complexity but also reduce risk. (The point being that complexity will also be introduced when making the system safer; simplification just makes analysing and screening for bugs more effective, but there are other risk factors.)


25 million miles is completely inadequate testing to make any such statement. The prevailing traffic fatality rate in the US is 1 in 125 million miles. 25 million miles means that the information is extremely insufficient to make any statement about comparative safety. In fact, making any such statement would be mathematical malpractice and criminally irresponsible. This also ignores the fact that unless those miles were all driven without any human correction you are actually testing the safety of a combined human+software solution which had better be better than just a human alone otherwise the system actually reduces safety.


>you are actually testing the safety of a combined human+software solution which had better be better than just a human alone otherwise the system actually reduces safety.

That's the entire point. Comma AI is a level 2 system. It is not intended to, nor will it ever be fully automated. It is a tool to augment and enhance human driving capabilities. Full autonomy at Level 3 and above is a 20+ year pipe dream at this point. Anyone advertising such a system (Tesla...) is just fraudulent.


> It's rock solid and completely safe

What is your source on this? I don't think even GeoHot says that.


> It's rock solid and completely safe

It sounds... Unbreakable!


> Now, I have nothing against using Linux for everything else in a car (infotainment, navigation, etc.)

Which is what AGL is primarily about. There is not a single set of requirements for something to be "automotive grade", neither in hardware nor in software.


Infotainment and navigation don't require the "automotive grade" label. "Automotive grade" is a special label for things that meet additional stringent requirements to be usable in the harsh environment of a car. You can run navigation on an Android phone and it will be perfectly happy without an "automotive grade" label.

Go to any electronic parts manufacturer and you will find separate sets of parts, ones for normal use and then ones for "automotive", that have improved resiliency to temperature, vibration, etc.

For software, this tends to be used to describe components meeting special additional requirements with regards to stability. You wouldn't want your ABS controller to decide to break at the most inopportune moment. There are special sets of coding standards for that kind of software (see MISRA) that are not required but strongly recommended.

For software that has no influence on safety, the tag "automotive grade" would have no meaning.


Eh, automotive requirements are more than just for safety-critical pieces - reliability matters for non-safety functions too, and all automotive Linuxes I've seen had similar pieces built in that direction - which, from my understanding, AGL attempts to standardize.

Then there's things that are less critical than the hardware keeping the car on the road, but more important than the radio, and Linux is creeping in there too. Be it parts of the instrument cluster, or non-required warning systems, or visualizations of surrounding traffic, or ...

Even though the requirements placed on these components are not as strict as, e.g., those for engine controllers, they are still additional requirements specific to the field, so I don't think the label is totally misused. Although I admit that the confusion it apparently causes every time it is mentioned points to it being a bad name nevertheless, and "automotive linux platform" would be appropriately buzzword-compliant too.


It's very easy to choose these trade offs yourself by modifying the kernel and creating your own version. Linux gets far higher real world use, meaning that even if there are errors, they will be patched upstream and you can integrate those into your patch. Code re-use helps us in this respect.

Barring a few small details, it's possible to configure Linux to be a deterministic-scheduled RTOS or whatever you want, really. I think it's a good idea to use a (hardened) version of Linux for these systems, and other rigorous design principles will handle the rest (for example, testing, signed binaries, program determinism, fault tolerance and redundancy).


Could you please explain why we need a full fat OS for a car anyway? I understand the importance of the ECU in modern cars, is there something that Linux can do in a car that the ECU can't?


You're confusing ECU and IVI, the ECU doesn't drive anything in a "digital cockpit" car, all the displays are handled by the IVI.


I would have agreed with you at one point, but most of the automotive companies are diving into this. They want less computers and would rather invest in hardware partitioning if needed.

BMW has said point blank that all of the software in the car will be running Linux.

Adopting Linux on BMW - The Long Road to Integrate Linux as Mainline Platform - Helio Chissini de Castro, BMW https://www.youtube.com/watch?v=gNr-XmWM-eM


"and it still regularly crashes on me". Your experience differs vastly from mine.

Not to say that Android is perfect, bullet-proof, or never crashes, but... it tends to be an app that crashes rather than the underlying OS doing the equivalent of a BSOD.


Most people can't tell the difference between app and OS crashes. With all the misinformation out there about it, who can blame them.


This top-level post is mostly incorrect. A Linux distribution is incredibly flexible and stable if optimized for such. It is a simple task to build in exactly what is needed from the ground up from stable LTS sources, and nothing else. Configuring for near-realtime responsiveness rather than throughput is trivial as well. That's why it scales from embedded devices to PCs to supercomputers. See: https://news.ycombinator.com/item?id=24260635

For hard realtime requirements, QNX is/was often used.

Splitting off all non-critical operations to isolated nodes is best practice.


Automotive Grade Linux is pretty much a Linux distribution for infotainment units. It's not about running Linux on ECUs.


Would an OS like QNX or VxWorks be more suitable for a car?


Parts.



