There is a conflicting set of requirements when developing Linux, and in the end it lands somewhere on a spectrum of compromise between performance, security, stability, features, and development cost.
And I think this is right: Linux is real software for the real world, and it would be wrong and wasteful for a general-purpose OS to prioritize one requirement to the exclusion of the others.
Unfortunately, automotive-grade software absolutely requires a focus on stability, and you just can't get that if you also want performance and features and want to do it cheaply.
I am also of the opinion that Linux is just way too complex to ever be stable enough to control your car. Think about Android. You get an OS tightly managed by Google, running on purpose-built hardware (I have a Pixel 2 XL), and it still regularly crashes on me and does otherwise funky things. I have no illusions the car industry is going to do any better.
Now, I have nothing against using Linux for everything else in a car (infotainment, navigation, etc.), but I would not trust it with anything that touches basic controls.
Honestly, I have never heard of a case where an OS being too complex was the cause of a catastrophe. Embedded systems, on the other hand...
I think the kernels of popular OSes are far more battle-tested than a standalone system can ever be at this point. Sure, maybe don't give essential controls to any complex (technical) system without the driver being able to override it. Just look at Boeing and MCAS, which was probably not based on Linux.
I think the automotive industry's current problem is that it has too often relied on proprietary systems and buses. An open system could handle the diversity of suppliers that is so common among manufacturers, and could supply a common base to develop against. I don't think it is fair to compare the stability of Linux to Android; they are worlds apart.
Tesla's OS is based on Linux, so we have a case where it does indeed work, at least for supplemental functions. And I fear that alternative choices could be the death of other manufacturers, because their software would fall behind.
Their supplier did all of this on purpose, to create lock-in, and it worked: now they charge by the line of code.
Any old Linux kernel haphazardly patched together to run on the target would be better than this garbage.
Do airplanes still run an emulator for a chip that is no longer produced because nobody wants to port the software? That was a fun story I still hope is an urban myth.
I wouldn't trust them to port it without introducing weird bugs after seeing stuff like the 35S anyway.
I don't even know how to sort those by level of stupidity...
This just reinforces your general point, but it wouldn't matter if the 737 ran on Linux. The MCAS behavior is part of the "business logic"; no underlying OS would have prevented that issue. Just like the safety of the Linux network stack doesn't prevent grandma from emailing her CC details to a scammer.
It doesn't seem the FAA is being too strict on them here either, but I suspect the Europeans and Canadians will not go as easy on Boeing as the FAA has.
I think a lot of people working with these closed systems try to impress those not in the field by cloaking them in mystery, but a lot of these systems are honestly dumpster fires. Not left-pad level, sure, but nothing at a level where Linux wouldn't do.
The rest is indeed Linux. I think it is a customized Ubuntu. The "rest" includes the autopilot...
Linux can be very stable (most of the time it is).
I think that in this context "Quality Assurance" (QA) is king, independently from what it tests.
Linux has a lot of stuff included in the kernel, and that stuff is actively used by a lot of people in a lot of different (potentially unexpected) ways. So if QA can certify both custom-written software and Linux (for the needed usage), I would tend to trust Linux more than software written by dedicated teams at companies, because QA could miss tests that would have been indirectly caught by the Linux community and its users.
And that's the problem with Linux: it's huge and it's always changing so that one cannot make any guarantees about anything except that it's broken in known and unknown ways. Saying that it's battle tested is not as useful as one would think, since any small kernel config change would result in a different configuration with different behavior. Abstract Linux is battle tested, but the config you want to use is probably not that well tested and even if it is, the whole development process is so chaotic that there's no guarantee that somebody of sufficient skill will report even the majority of encountered bugs and that those will be fixed.
Android regularly crashes in part because it's very complex, but also in part because there's really not much of a problem if it crashes occasionally. Cars have much stricter requirements.
The car industry has more or less settled on a process, "functional safety certification" (FuSa for short); if you can verify that you've followed the process, then you have a reasonable defense in court if the software then crashes.
> Now, I have nothing against using Linux for everything else in a car (infotainment, navigation, etc.) but I would not trust one that will have anything to do with basic controls.
Indeed; you run Linux inside a safety-certified hypervisor, and run your controls in specialized, safety-certified RTOS guests.
ARM has settled on Xen as the hypervisor for their automotive reference stack, and there are currently efforts to achieve safety certification for Xen:
I don't remember a single Android crash or freeze in recent memory (various Samsung Galaxy S-series). Not that my iPhone crashes either.
Haven't seen the beta crash but I'd attribute that to not using it on either of my main iThings.
Another solution, used in payment terminals, is the technical requirement that the pinpad (keyboard + screen) be controlled by a separate computer. This is achieved by galvanically switching the keyboard and screen over to perform PIN entry, then switching ownership back to the main application, which can then fetch the encrypted PIN block and perform the rest of the transaction, never having access to either plaintext cryptographic material or the PIN.
Now, similar solutions could be used for car security, but they would still require separation so strong, and applications so different, that they could just as well be different OSes with no commonality.
Right, the "simple straightforward" solution for all of this is to have a separate SoC for each control unit you need.
But this has lots of things that are suboptimal:
1. Less flexibility. You need to know ahead of time exactly how many SoCs you need. If at the 11th hour you decide you need another unit (or need one fewer), you have to go back and redesign your entire electronics layout.
2. Less efficiency. If your unit typically only uses a small portion of its compute power, it's still taking up space, taking up weight, and drawing a basic amount of "idle" power.
With virtualization, your RT guest OSes typically still want their own dedicated core. But it's a lot more efficient to put a 6-core chip on a system that only needs 4 cores than to design 6 separate SoCs onto a system that in the end only needs 4 of them.
I disagree; it's fine for automotive use, but it will be a stripped-down fork optimized for it, just like uClinux and other purpose-built distros.
I get sub-microsecond response times on my servers which, afaik, is a few orders of magnitude better than what's needed for control systems.
Basically, on a modern ARM chipset, the requirements of a Linux distro suited for cars aren't particularly exotic these days. All the logical bits are there; it just needs firmware/drivers.
This means that every operation you call from critical realtime code must not suddenly start doing housekeeping.
On Linux, even an operation as simple as dereferencing a pointer can cause a context switch into the kernel, which then starts doing its memory-management work (not really Linux's fault; that's how the x86 architecture is built).
So while a machine might be able to respond "orders of magnitude" better than control systems require, it is the one response in a thousand or a million that misses the limit that is the deal breaker here.
I have implemented an embedded controller for a coin acceptor. If the driver does not respond within X milliseconds, the coin passes the gate and there is one more angry customer.
I have implemented a framework for algorithmic trading for one of the largest brokerage houses on the Warsaw Stock Exchange. You have XX microseconds to respond to a market event, and if you don't, there can be financial consequences.
Both apps were implemented on Linux. In both cases I had to work my ass off to track down a multitude of potential causes of unpredictability, and I think I did a pretty good job, but I cannot guarantee that it will always work within the time limit. If somebody's life were at stake, I would refuse to do it.
Did you know that x86 CPUs stop from time to time to do their own housekeeping, like checking the temperature to decide whether to throttle? You can turn it off (at the cost of the server potentially overheating), but how many more potential sources of problems are there?
Satisfying hard real time requirements is hard, but not impossible. When testing, set a fault/trace point on long response times, debug, rinse, repeat.
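A toy version of that rinse-and-repeat loop (names and thresholds here are invented for illustration; a real setup would use cyclictest or ftrace tracepoints) just samples a periodic wakeup and logs every response that blows the budget:

```python
import time

def probe_wakeup_jitter(period_ns=1_000_000, samples=500, budget_ns=5_000_000):
    """Sleep on a fixed period and record how far each wakeup overshoots its
    deadline; overshoots beyond budget_ns are the outliers worth debugging."""
    worst = 0
    violations = []
    deadline = time.monotonic_ns() + period_ns
    for i in range(samples):
        remaining = deadline - time.monotonic_ns()
        if remaining > 0:
            time.sleep(remaining / 1e9)
        late = time.monotonic_ns() - deadline  # > 0 means we woke up late
        worst = max(worst, late)
        if late > budget_ns:
            violations.append((i, late))       # candidates for a trace point
        deadline += period_ns
    return worst, violations
```

On a stock desktop kernel this will occasionally report multi-millisecond overshoots; chasing each one back to its cause (scheduler, power management, paging) is exactly the grind described above.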
The point is that Linux (or QNX or whatever other RTOS) have way more resources, libraries, and commoditization than a dedicated microcontroller/FPGA and are more widely tested than some proprietary OS.
And those Intel hardware faults you mention, aside from being irrelevant to the kind of architectures a car would use, are on the order of microseconds, well within the operating requirements of a car (or coin system).
The coin acceptor I mentioned is from a completely different area. It was for a fixed-odds betting terminal. The company ported their software from an RTOS to Linux, but they had an issue with occasionally missing events generated by the custom hardware board controlling the device. This being a betting terminal, that caused grief to customers, and unhappy customers == no income.
I was contracted to solve this issue for them. The fix, I figured out, was to write a proper kernel driver for their board, restructure their existing code, and tune their Linux kernel and its configuration.
The device they used as a coin acceptor was very cheap (as one would expect for a device that returned its cost in a couple of days of operation...) and it had no controller, just an analog output. The output shows a value related to the current induced while the coin falls through a piece of tubing inside an energized coil. The software has to probe the value at regular intervals while the coin is passing the sensor, and then classify it as one of the possible coins, or as an unknown object, based on the duration, strength, and profile of the signal. These relate to the dimensions and materials of the coin.
The precision of the measurements greatly influences how reliably various types of coins are detected; it allows tighter tolerances, which helps prevent fraud.
Once the coin passes the detector, it has a couple of centimeters before it arrives at a gate that sorts the coins to their possible destinations.
The software has to sample the analog value periodically (multiple times a millisecond), and then has a couple of milliseconds to decide what the coin is and send a command to the gate.
There is not much to it, unless some other interrupt completely disrupts the process, and this is the part they could not figure out how to solve. The calculations are not complex, and there is plenty of time to perform them.
I suggested they could easily implement this on their board and return a buffer of all the values to Linux, but they told me they have a huge number of these machines in the field and don't want a new controller design.
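For a rough feel of the classification step (every number and coin name below is invented for illustration, not from the actual product), the signal profile reduces to a duration and a peak, checked against per-coin tolerance windows:

```python
# Hypothetical tolerance table: nominal signal duration and peak amplitude
# per coin type, derived from the coin's dimensions and materials.
COIN_TABLE = {
    "10c": {"duration_ms": (2.0, 2.6), "peak": (0.40, 0.55)},
    "50c": {"duration_ms": (3.1, 3.8), "peak": (0.62, 0.80)},
    "1e":  {"duration_ms": (4.0, 4.9), "peak": (0.85, 1.05)},
}

def classify(samples, sample_period_ms=0.2):
    """samples: analog readings taken while the coin passes the coil."""
    active = [s for s in samples if s > 0.05]   # above an assumed noise floor
    if not active:
        return "unknown"
    duration = len(active) * sample_period_ms
    peak = max(active)
    for coin, tol in COIN_TABLE.items():
        lo_d, hi_d = tol["duration_ms"]
        lo_p, hi_p = tol["peak"]
        if lo_d <= duration <= hi_d and lo_p <= peak <= hi_p:
            return coin
    return "unknown"    # reject: route the object to the return chute
```

Tightening the windows rejects more slugs but also more genuine coins, which is the tolerance trade-off mentioned above.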
Have you worked in vehicle embedded systems or are you just pulling requirements (or lack thereof) out of your hat?
I'm not an expert, but I would guess that they do require deterministic behaviour, which, AFAIK, Linux doesn't offer. What happens if for some weird reason the scheduler starves the critical process for 40 ms in some uncommon situation that only happens 0.00001% of the time? How can you prove that such a thing won't ever happen?
Usually that involves mapping device registers into the process address space via a custom ioctl, and then just polling in a tight loop.
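A sketch of the shape of that loop (an anonymous mapping stands in here for the device registers a real driver would hand you via mmap/ioctl; the offsets and bit meanings are made up):

```python
import mmap
import struct

# Stand-in for kernel bypass: in the real setup the driver's ioctl/mmap hands
# you a view of device memory; an anonymous mapping plays that role here.
PAGE = mmap.mmap(-1, 4096)

STATUS_REG = 0x00   # hypothetical register offsets
DATA_REG   = 0x04

def read_reg(offset):
    return struct.unpack_from("<I", PAGE, offset)[0]

def poll_for_data(max_spins=1_000_000):
    """Tight poll on the status register; no syscalls inside the loop."""
    for _ in range(max_spins):
        if read_reg(STATUS_REG) & 0x1:        # "data ready" bit
            return read_reg(DATA_REG)
    raise TimeoutError("device never signalled ready")

# A fake "device" writing a value and raising the ready bit.
struct.pack_into("<I", PAGE, DATA_REG, 0xCAFE)
struct.pack_into("<I", PAGE, STATUS_REG, 0x1)
```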
Your filesystem can decide to do some housekeeping while you write, and something as simple as dereferencing a pointer can cause a context switch into the Linux kernel to do TLB work. You might want to allocate some memory, and Linux might decide to move a bunch of memory around before it hands the pointer back to you.
What I mean is, guaranteeing the execution time of a piece of code is not a task for the faint-hearted. You need to forget IO. You need to forget switching context into the kernel (basically forget any syscalls). You need to forget allocating any memory after program startup. You need to use really huge pages. Forget being green, because changing CPU frequency or entering sleep states can cause unexpected latency in the middle of an operation.
Also, doing this inside a VM is nigh impossible.
Memory is normally pre-allocated and mlocked into place at startup. TLB misses can happen, but those are filled in hardware, just like regular cache misses, not by kernel activity (and often from cache). The "TLB shootdowns" you read about happen when memory is unmapped, so you don't. The kernel can block you if you write to memory mapping a file backed by a physical disk, so you don't.
And, yes, the core is locked in C0 state, drawing maximum power full time.
It is certainly possible to do I/O, by operating directly on the device registers that I noted were mapped into process memory. And, in fact, this is a routine method of conducting such operations. You can read about it by searching "kernel bypass".
There is usually no need or desire to do it in a VM, but kernel-bypass libraries are quite capable of mapping their registers into a VM's memory, so somebody must be doing it.
Commonly, input from network interfaces is dumped into one or more big-ass ring buffers mapped onto /dev/hugepages; and other, regular processes, running on other cores, map those read-only and pick over it for interesting bits, and maybe do file system operations of their own.
AGL targets infotainment systems and other aspects. It can have access to more important data but it's not controlling the car.
"Although initially focused on In-Vehicle-Infotainment (IVI), AGL is the only organization planning to address all software in the vehicle, including instrument cluster, heads up display, telematics, advanced driver assistance systems (ADAS) and autonomous driving." 
That's getting pretty damn far into controlling the car.
Also, couldn't ADAS be an issue? Sending the driver lagged lane encroachment info, for example.
"Lagged by how much?" is the qualifier that determines whether that is a problem. 100 ms wouldn't be a problem; 200 ms could cause issues but likely wouldn't result in crashes.
"Automotive Grade Linux is a collaborative open source project that is bringing together automakers, suppliers and technology companies to accelerate the development and adoption of a fully open software stack for the connected car. With Linux at its core, AGL is developing an open platform from the ground up that can serve as the de facto industry standard to enable rapid development of new features and technologies."
See, nowhere does it mention this is for infotainment only.
Of course it makes sense to have a special-purpose microcontroller that handles real-time tasks related to engine and brake control. But those devices lack MMUs or even caches, so running Linux on them is not terribly interesting.
Please feel free to point out all the many ways I am wrong as I imagine I am missing something here.
For deterministic, high-frequency real time, like controlling an internal combustion engine or the damper control of the chassis, for heaven's sake use a deterministic CPU running a real-time OS.
However, the parameterization of that real-time control, based on sensor fusion, object detection, and trajectory planning (all high-data-volume, highly dynamic processes), can happily live on Linux.
Linux thrives in high performance volume data processing in networked environments.
The challenge is not to show but to rationalize and "explain" its dependability. It works. How can we explain why this reliability is not random, but explicably dependable?
That's what https://elisa.tech is all about, and my gut feeling is that Linux will move from infotainment to certifiable functional safety within the next year or two.
Compared to this, rockets are a tightly controlled environment, running for a very short time. There is also an incredible amount of redundancy and supervision in a rocket, but in a car you will not be able to duplicate every critical component and add a separate controller to check that both return matching results and flag which might be faulty, and you will not have a team of specialists obsessing every time something doesn't work exactly right.
That said, mass produced electronics are cheap - even automotive grade ones. It wouldn't add that much to the BOM, it would be mostly design cost. I just can't see the justification without it being required by regulation.
In the rocket case, you need very tight tolerances, but it does not matter much if it fails 1 in 1000 times, since the total-system-failure rate is on the order of 1 in 100. A single 1-in-1000 failure mode does not massively increase the total-system-failure rate. However, in the car case the tolerances are looser, but the requirement to consistently deliver on those tolerances is vastly more important. Consider the Ford Pinto, which sold 3,000,000 cars and had 27 deaths attributed to the fuel-tank design defect. A 1-in-100,000 failure mode is viewed as an unconscionable deathtrap, so adding a single 1-in-1000 failure mode would make your system 100x deadlier than the most egregious example of an unsafe system.
The prevailing rate of system errors tells you what sort of errors a quality assurance process can detect. If you have 1 in 1000 system errors, then it means you can probably detect things on that order otherwise your system error rate would be dominated by those higher frequency errors. However, you probably can not or do not bother to detect 1 in 1,000,000,000 errors since they would either happen so infrequently compared to other errors or not be worth solving since they have such a minor impact on overall reliability.
So, in summary: reliability is different from tolerances, techniques used on low-reliability systems are not necessarily applicable to high-reliability systems, and rockets are on the order of 10,000 to 1,000,000 times less reliable than cars, so technology used on rockets does not necessarily transfer to high-reliability industries.
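The effect described above can be checked with a line of probability arithmetic (assuming independent failure modes): adding a 1-in-1000 failure mode barely moves a 1-in-100 system, but dominates a 1-in-100,000 one.

```python
def combined_failure_rate(rates):
    """P(at least one failure) across independent failure modes."""
    p_ok = 1.0
    for r in rates:
        p_ok *= 1.0 - r
    return 1.0 - p_ok

# Rocket-like baseline: the new 1-in-1000 mode is lost in the noise.
rocket = combined_failure_rate([1 / 100, 1 / 1000])      # ~1.1 in 100
# Car-like baseline: the same mode now dominates total failures.
car = combined_failure_rate([1 / 100_000, 1 / 1000])     # ~1 in 990
```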
I believe this is the setup they use for the main stabilization computer on each stage, as well as the engine controllers for each engine.
I did not confirm this; it's purely an assumption on my part, as it does appear to have access to HVAC, lock settings, and fuel information, which makes me suspect it's not so isolated.
Paraphrasing: "You can remote jailbreak Android - do we really want to trust every vehicle in the country to use it for their control systems?"
I think they are knowledgeable on the subject but I may be misquoting them or not including some nuance about the security concern.
FSB, GRU, Mossad, MI6, NSA, CIA, GCHQ, DGSE, CSIS, ASD, CSE, BND, NIS, PSIA, GCSB, ISI, RAW, MOIS, MSS, MIT, GIP, CISEN, NIA, RGB, and all other intelligence agencies in the world: "Yes, that is a splendid direction."
On a serious note: given the automotive industry's historical track record of timely updating their car software, and doing so for as long as a car is kept running (with the possible exception of Tesla), I'm not sure any OS will be sufficient. Without some legal definition of orphaned automotive software (something like X unfixed CERT vulnerabilities over Y years), and legally mandated regulation that open-sources the ability to load alternative software, that security drift will always be the Achilles heel of any gear that runs software, not just cars.
This is one of the reasons I've become a lot more selective with hardware gear. Any gear that lets me use open source software immediately goes to the top of the evaluation list above closed source vendor software, because I've been stranded way too many times by hardware with closed source software the vendor abandons for a newer model.
We already trust every vehicle in the country to use jailbroken software in their control systems.
Mechanics regularly upload modified manufacturer and third party firmware.
Car enthusiasts modify and trade patched binaries for their cars' ECUs on forums like phone jailbreaking enthusiasts do on XDA-Developers.
It's like saying that Linux will be crashing while driving SCADA systems because you've seen your KDE desktop crash a few times.
C'mon you guys are supposed to be engineers, think a bit -_-
I can't remember the last time I encountered a genuine kernel panic on my desktop. I don't imagine that these "Linux powered cars" will be running KDE Plasma, Gnome, or anything else remotely as complex as a modern DE.
I also really love that aftermarket ECU manufacturers have been able to find a market, and I wanted to support that. Almost all of the ECU companies I've been in contact with were started by one or two engineers as a commercial project for enthusiasts. You just don't see bread and butter circuitry and embedded software companies solving real problems like that getting a fair go in the market these days. It's a niche where talent makes your reputation, and it's not a space big enough to get "disrupted" by the Venture Capital bubble.
Comma AI has been shipping self-driving software based on Android for 4 years now. It's rock solid and completely safe, with millions of miles driven. The idea of everything in automotive requiring formally verified aerospace-grade software is just nuts. It's better to have flexible, adaptable software whose failure modes are fully covered than to rely on an illusion of perfection that will never actually exist.
This isn't really true. For example, if you add redundancy you technically increase complexity but also reduce risk. The point being that complexity will also be introduced when making the system safer; simplification just makes analyzing and screening for bugs more effective, but there are other risk factors.
That's the entire point. Comma AI is a level 2 system. It is not intended to, nor will it ever be fully automated. It is a tool to augment and enhance human driving capabilities. Full autonomy at Level 3 and above is a 20+ year pipe dream at this point. Anyone advertising such a system (Tesla...) is just fraudulent.
What is your source on this? I don't think even GeoHot says that.
It sounds... Unbreakable!
Which is what AGL is primarily about. There is not a single set of requirements for something to be "automotive grade", neither in hardware nor in software.
Go to any electronic parts manufacturer and you will find separate sets of parts, ones for normal use and then ones for "automotive", that have improved resiliency to temperature, vibration, etc.
For software, the term tends to describe components meeting special additional requirements with regard to stability. You wouldn't want your ABS controller to decide to break at the most inopportune moment. There are special coding standards for that kind of software (see MISRA) that are not required but highly recommended.
For software that has no influence on safety, the tag "automotive grade" would have no meaning.
Then there's things that are less critical than the hardware keeping the car on the road, but more important than the radio, and Linux is creeping in there too. Be it parts of the instrument cluster, or non-required warning systems, or visualizations of surrounding traffic, or ...
Just because the requirements placed on these components are not as strict as, e.g., those for engine controllers, they are still additional requirements specific to the field, so I don't think the label is totally misused. Although I admit that the confusion it apparently causes every time it is mentioned points to it being a bad name nevertheless, and "automotive Linux platform" would be appropriately buzzword-compliant too.
Barring a few small details, it's possible to configure Linux to be a deterministically scheduled RTOS or whatever you want, really. I think it's a good idea to use a (hardened) version of Linux for these systems, and other rigorous design principles will handle the rest (for example, testing, signed binaries, program determinism, fault tolerance, and redundancy).
BMW has said point blank that all of the software in the car will be running Linux.
Adopting Linux on BMW - The Long Road to Integrate Linux as Mainline Platform - Helio Chissini de Castro, BMW
Not to say that Android is perfect, nor bullet-proof, nor never-ever crashes, buuuut... it tends to be an app that crashes rather than the underlying OS doing the equivalent of a BSOD.
For hard realtime requirements, QNX is/was often used.
Splitting off all non-critical operations to isolated nodes is best practice.
It is nice to have an alternative to Google's car spy software, but I am suspicious of any ECU containing malloc. If ECUs get so complicated that they require Linux, maybe there is just a fundamental design issue.
(EDIT: ECU as in electronic control unit generally, not only the engine control unit.)
Otherwise it's a rather nicely designed system, using D-Bus for communication without too many weird hacks.
On the other hand, with or without Android Auto, my infotainment is wildly unstable, very slow to start (and inconsistently so), and generally sluggish. Starting this spring, Android Auto often crashes out a few seconds in, or if I try to interact with anything (though that could be in part or in whole on the phone side - stock Google Pixel 3...)
Are you using the AIO Auto implementation or the official Mazda one?
https://www.amazon.com/Mazda-CarPlay-Retrofit-0000-8F-Z34-TK... / installation instructions: https://cdn.shopify.com/s/files/1/2367/6353/files/CPAA_INSTA...
I heard that US dealers charge about $400 for both the equipment and the installation work. Here in the EU I paid about 250 EUR.
At the time I never heard anyone - not infotainment guys, not ECU, not the Linux geeks - seriously proposing to put Linux there.
The sane thing to do when using a general-purpose OS as an RTOS is to pre-allocate memory during an init phase, then run the control loop as if everything were statically allocated. An ECU (or any tight control loop, really) should have ZERO need to dynamically allocate memory while running.
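The init-then-loop pattern looks something like this sketch (Python only illustrates the shape; in C the fixed slots really do avoid all runtime allocation, whereas CPython still creates float objects under the hood):

```python
class ControlLoopBuffers:
    """Sketch of the init-then-loop pattern: everything the loop touches is
    allocated once here; the loop itself only reuses these slots."""

    def __init__(self, n_samples=256):
        self.samples = [0.0] * n_samples    # fixed-size sample window
        self.head = 0

    def record(self, value):
        # No list.append, no dict growth: overwrite in place, ring-buffer style.
        self.samples[self.head] = value
        self.head = (self.head + 1) % len(self.samples)

# Init phase: allocate once, before the control loop starts.
buf = ControlLoopBuffers()
# Control loop: touches only pre-allocated slots.
for v in (1.0, 2.0, 3.0):
    buf.record(v)
```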
You wouldn’t want a malformed mp3 to cause catastrophic engine failure. You would isolate the two systems completely.
Likewise, something like an autopilot would sit above the ECU and be isolated from anything accepting user data, for the same reason: you need hard deadlines for the various navigation functions to operate within. Not as tight as the ECU, where you are counting clock ticks, but much stricter than a typical general-purpose OS.
The main reason is the fast MMU IO drivers available for the IO boards, like the gigabit network card or FireWire. You don't want to write those drivers. The OS itself is quite trivial compared to that. That's why they use the antique and insecure design of Linux, rather than a modern, small microkernel.
I wrote a C++ cross-compiler for that. And we even linked Fortran BLAS into that. dSpace only supports C, but you really want overloaded ops for matrix calculations and such.
There's no malloc, no exceptions, no terminal IO, no HD.
This doesn't make any sense. The SSME 1st gen controllers were from the 60s, the 2nd gen controllers 68k-based and designed in the early 80s. The main avionics were AP-101s, which predate Linux by decades as well.
Just because something once seemed like a bad idea to me, doesn't mean it won't be done - perhaps successfully!
The engine stalling whenever the GC runs is a bonus feature.
An engine running at 3000 RPM is only doing 50 rotations per second, so you have a 20 ms budget to calculate whatever you want, hit whatever spark plug timing you need and then GC.
The engine should still work at redline, so let's go for a conservative 6000 RPM, or a 10 ms budget. The spark plugs don't fire at the same time either, but (as far as I know) every second rotation, in a staggered manner. So with a 4-piston engine you'd have 5 ms (one firing every 180 degrees) to do calculations, check for failure conditions, and run the GC, with the latter needing to finish in time.
It might be possible to do a proof of concept, but I don't think it's that easy.
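The budget arithmetic above, as a quick sanity check:

```python
def firing_budget_ms(rpm, cylinders=4):
    """Time between successive firings on a four-stroke engine: each cylinder
    fires once per two crank revolutions, i.e. one firing every
    720 / cylinders degrees of crank rotation."""
    rev_per_s = rpm / 60.0
    ms_per_rev = 1000.0 / rev_per_s
    firings_per_rev = cylinders / 2.0   # 4 cylinders -> one every 180 degrees
    return ms_per_rev / firings_per_rev
```

At 6000 RPM a 4-cylinder gives the 5 ms window mentioned above; any GC pause has to fit inside whatever is left of it after the actual control work.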
It's a seriously hard optimization problem, one that's not yet up to the task of providing reliable engine management quickly enough to also provide throttle-by-wire response in general production cars that isn't as spongy as an old Mercedes diesel sedan. I mean, if Mazda could provide throttle response for the new throttle-by-wire Miata that was remotely similar to a 30 year old model, I think they would.
Also, the ECU does more than just the cylinder values, it's monitoring other systems as well, such as the transmission, traction control, &c which also have hard deadlines.
I don't think there is enough time to garbage collect, and as others have stated, there should be little to no need to allocate in this type of code anyway.
It sure is in CPython. `a = 1` may need to allocate/reallocate the dict holding the local/global variables. If an integer exceeds 256 or goes below -5, you're getting an allocation for the object, regardless of whether that number has been seen before or not. Of course you can forget about floating point and function calls, maybe even function calls to C functions.
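The small-int cache boundary is easy to observe directly (CPython-specific behavior; the values are computed from a variable so the compiler's constant folding can't merge them ahead of time):

```python
# CPython caches the integers -5..256, so arithmetic landing in that range
# reuses objects, while anything outside it allocates a fresh object.
n = 200
small_a = n + 56      # 256: served from the small-int cache
small_b = n + 56
big_a = n + 57        # 257: a new heap object per evaluation
big_b = n + 57

same_small = small_a is small_b   # identical cached object
same_big = big_a is big_b         # two distinct allocations
```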
There's for sure no dynamic memory, no recursion, only fixed memory slots.
how much of the above is a typo and how much is real?
I'm pretty sure 'KHz' is a typo which maybe should be kHz as in the start of the sentence. But even then, why are the very best running 50x slower than the trivial ones? Or did you mean MHz?
Do you have a link to an example of an F1 ECU with the clock rate specified?
Thing is, you can use the whole repeating schedule, parameterized with adjustments to instantaneous RPM, for many cycles before there is need to check for e.g. throttle input changes.
In between, it is a dance -- shoot some fuel into 1, wait a bit, stop shooting fuel, wait a bit, start closing a valve, wait a bit, schedule a spark, wait a bit, ...
All the settings are chosen, whenever input changes, by table lookup. So tuning means fooling with table entries.
The bottleneck is not the CPU, nor the model running in the loop, but the IO devices and drivers costing latency. In real time, latency kills you. It turns out to be bent cables, missing resistors, broken filters, or such. A single missed cycle in real time means a reboot or worse.
Sorry, no links afaik. Internal knowledge.
The former is a safety critical hard real-time system, the latter can be anything including the widely open "infotainment" system.
My understanding is that AGL currently mostly targets infotainment systems.
Given the auto-update fad that breaks my stuff all the time on my office Win 10 computer (both from MS and the company IT department), I'm not very keen on having over-the-air updates for ECUs or the infotainment system. At least not automatic ones.
Nobody uses assembly code if they can avoid it. But the machine instructions that implement the code are known and unchanging. And nobody re-uses that memory for different things at different times, if there is enough for each use to have its own. Which there always is, nowadays.
Note: Python 2.7.3 or greater excluding Python 3.x, which is not supported.
I'm not a developer by trade or talent, but I'm learning and I'm starting with Python. Everything I've read and been told about Python development says to start with 3.x because the 2.x branch is officially deprecated and should only be targeted if one is intending to learn specifically for maintaining a legacy project that still uses it. I'm learning to learn, not to work, so I think I'll wait until AGL is brought up to a current version of the language before I dive in and get myself confused.
Beyond that though, I do like the premise of having the infotainment stack open source from the ground up. It would be nice if manufacturers would pay attention to this project and write open source drivers for their hardware.
1) Hardware constraints.
2) Software must be real-time.
Non-embedded developers are usually unable to understand the constraints of embedded work and chalk it up to "you are dinosaurs". Not true.
The lack of Py3 support is not about Python vs $language on $environment, it's about using EOL software in general.
 - my last public embedded software contribution was in fact an ELF loader for a smartwatch so that I wouldn't have to use micropython: https://git.card10.badge.events.ccc.de/card10/firmware/-/tre...
But again, I'm not a developer and I'm just getting my feet wet so my opinion is likely wrong.
Maybe they don't understand automotive is old. There is more tech debt than you can imagine.
If you're never paying back your tech debt, it's not tech debt. It's just poor software engineering practices.
I’d rather just put a normal Linux distro there; however, I don't feel like waiting 8 seconds every time I turn on my car.
Apple CarPlay is in the same boat requiring an MFi private certificate, and no way to provision a short lived cert like they do for iPhones.
I've been working on a Pi Zero W based carputer (really, it's headless and just interfaces with my phone over Bluetooth / the existing radio head unit and steering wheel menu and controls).
Boot time is probably around the ~8 second mark currently (fully booted with buildroot / systemd; it's powered from 12V switched by the key) and it already feels fast enough. I normally start my car, put on my seat belt and look around. By the time I've finished doing that it has connected to my phone.
Of course, it would be possible to do even smarter things. Hook up a low power micro that can interface with the CAN bus and a MOSFET, and you can instead power it from the battery and turn it on when the car is unlocked and off after a timeout based on the engine going off.
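That power-sequencing idea can be sketched as a simple state machine. Python here is purely for illustration; on a real low-power micro this would be a small C loop polling the CAN transceiver, and the event names, MOSFET wiring, and 60-second timeout are all hypothetical:

```python
SHUTDOWN_TIMEOUT = 60.0  # seconds after engine-off before cutting power (arbitrary)

class PowerController:
    """Drives a (hypothetical) MOSFET gate based on decoded CAN events."""

    def __init__(self):
        self.mosfet_on = False
        self.engine_off_since = None

    def on_event(self, event: str, now: float) -> None:
        if event == "car_unlocked":
            self.mosfet_on = True        # power the carputer up early
            self.engine_off_since = None
        elif event == "engine_on":
            self.engine_off_since = None
        elif event == "engine_off":
            self.engine_off_since = now  # start the shutdown grace period

    def tick(self, now: float) -> None:
        if (self.engine_off_since is not None
                and now - self.engine_off_since >= SHUTDOWN_TIMEOUT):
            self.mosfet_on = False       # cut power after the grace period

pc = PowerController()
pc.on_event("car_unlocked", 0.0)   # carputer boots while you get seated
pc.on_event("engine_off", 10.0)
pc.tick(100.0)
print(pc.mosfet_on)  # False: grace period expired, power cut
```

The grace period keeps the board alive across short stops (fuel, drop-offs) without ever draining the battery overnight.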
if Tesla could write their UI stack with the LGPL version of Qt, likely you can do so too (https://soylentnews.org/article.pl?sid=18/05/20/1737213)
Suspend-to-RAM works decently to avoid this, and to avoid it sucking the battery dry, Suspend-to-Flash can be used. Yes this will take some time for restoring RAM... but again, to avoid the user noticing that time, why not start the wakeup process when the user unlocks their car / opens a door?
On desktops and laptops, it Usually Should Just Work (tm). The general culprits for issues are proprietary graphics and wifi drivers as well as shoddy BIOSes with broken ACPI tables, but I never had to dig this deep.
For embedded systems, the situation is more complex, as the chipset needs to support it - suspend itself is not the problem in most cases, but wakeup is - for example, the Raspberry Pi does not have a dedicated interrupt (https://github.com/raspberrypi/linux/issues/1281). If you're working with an embedded chipset, your best bet is the manufacturer, they have to deal with that in the BSP.
lucky you - I have a laptop (GS65) where suspend-to-ram occasionally fails to unsuspend, even on windows
Sadly there is not a really big community around either of these.
Test builds should have KASAN/ASan turned on by default.
Production builds should ruthlessly remove unused binaries and make heavy use of --gc-sections to cull dead code. Even SQLite should have its features whitelisted to make it safer.
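For the dead-code culling, the usual recipe is to compile every function and data object into its own section and let the linker drop whatever nothing references. These are the standard GCC/Clang flags; the file names are placeholders:

```shell
# Put each function and data object in its own ELF section...
gcc -Os -ffunction-sections -fdata-sections -c app.c -o app.o

# ...then let the linker garbage-collect unreferenced sections.
# --print-gc-sections lists what got dropped, which is handy for auditing.
gcc app.o -Wl,--gc-sections -Wl,--print-gc-sections -o app
```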
Several TLA+ and Alloy specs need to be written to ensure distro safety.
But for the dataplane software responsible for dashboard clusters and the like, this scares the hell out of me, because Linux isn't an RTOS and because of all the ways you'd have to protect that system from anything else running on the same kernel (e.g. an update to Apple CarPlay or Android Auto adds/exacerbates a resource leak). If the vehicle system was designed with Linux in mind from the beginning, having Linux handle the dataplane startup and restart as well could be OK (because you'd probably make user-space programs with direct access to dedicated hardware and priority that preempts the kernel, to make the code as predictable as possible), but combining the dataplane for driving the vehicle with infotainment into a single OS scares me. A container or two could work here, but now we're talking about running a container at a priority high enough to tell Linux to fuck off if it's in the middle of something, and the base OS would need to be locked down decently hard as well to ensure misconfigurations wouldn't bork that dataplane container.
Really, I'd much rather have it (keep it?) the way airplanes are designed, where the dataplane and control plane for maneuvering the vehicle are entirely separate from any other systems on the vehicle, down to at least VLANs, with well-defined APIs for the maneuvering system to report data to less-critical systems.
ASIL-D is most likely realized by a central ECU that manages state.
No you don't. What you're hearing is this dead horse being constantly dragged out for another round of beatings. The problem is, nobody has ever defined what the phrase even means, so it can never be "true".
Naturally, the only ones that succeeded at it rely on the Linux kernel only as an implementation detail, and could actually use any POSIX-like kernel.
"The future is already here — it's just not very evenly distributed." The year of the Linux desktop was a decade ago, and it remains perfectly usable.
Just don't name it 0day, or get DoSed...
Great! Now you have AGL in your car. The first thing you do is plug in your phone for Android Auto or CarPlay anyway.
What car companies should focus on is adding things like wireless CarPlay and Android Auto support. Also, CarPlay supports multiple independent displays, which no auto manufacturer implements either.
It’s a replacement interface that runs on your iPhone, but it’s not an infotainment system. It doesn’t do any of the heavy lifting those systems have to do to make CarPlay possible.
Guess what, he said: "think of Windows for cars"
That made me laugh hard. If they do what he says, it will be just another nail in the coffin of good ol' Daimler.
What is it that you really need, more than CAN, SOME/IP and a small Web-Server/-Client ?
Sciopta can even be bought ASIL-D/SIL3 certified.
It wouldn't even fill half of your 4 MB flash.
How is that better than paying for Linux support, where there's a market for this service, rather than a monopoly?
I don't think Windows Automotive got a fair shake.
And even in the server room, thanks to cloud, managed runtimes and serverless, Linux's relevance is going to fade in the upcoming decade.
Your generic talking points about Linux in other fields are completely irrelevant for that.