Linux Kernel Fastboot [pdf] (linuxplumbersconf.org)
414 points by agumonkey on Sept 15, 2019 | 118 comments



How nostalgic! Mind you, I say that in a good way!

This brings me back to the heyday of Sun Microsystems, when they controlled both the hardware (SPARC) and the operating system (Solaris). Sun didn't hesitate to use this to their advantage. With Clear Linux, I see Intel doing some very similar things. Yes, this was just a story about boot time, but it led me to take a look at Clear Linux, and it looks like it could be a star. I'm interested in how much more performance their distribution is able to squeeze out. I'm also wondering how they're dividing their focus here and what high notes they're trying to hit.

At a strategic level, I'm assuming that they're wanting to distinguish themselves from ARM? Are they looking to create a well-tuned reference for other Linux distributions on Intel to adopt? At a tactical level, I see they're focusing on performance, availability (which boot time would be a function of)... but are they tweaking for other things such as reliability? Fault tolerance?

Until now, Clear Linux wasn't even on my radar. Today, it is. Anyone have some experiences with this distro (for better or worse) that they'd like to share?


> Anyone have some experiences with this distro (for better or worse) that they'd like to share?

Not much of one, but for a while I was playing with multiple distros trying to find the best possible power management for my laptop. (XPS 13 9380.)

Clear did better than the Ubuntu installation that came with it -- but a dead octopus would have done better than that. It was sitting at 6W for non-CPU-intensive work, and was also noticeably snappier. Windows on the same hardware sits at 8W when idling; same as Ubuntu.

More recently I've tried Arch, though, which somehow got that same workload down to 3.3W, albeit with a lot of adjustments... they didn't seem to affect performance, but they clearly did affect power use.

It's just too bad that all of this optimization goes out the window if I fire up a browser, but I've had a fun time finding ways to avoid that.

Apart from that, obviously Clear's software repository is far more limited than Arch's, and it also seems harder to contribute to. I don't think that problem will be going away soon.


Firefox will suspend running pages when it is unmapped, so I put it on its own virtual desktop and only switch to that desktop when I need a webpage.


And, if you're like me and have dozens of tabs open for weeks on end, the OneTab extension is pretty awesome too.


I'm using "Auto Tab Discard" and I'm happy with it. It "unloads" the unused tabs and reloads them once you switch back to them.


I'd love to hear more about the tweaks you've done to Arch. I've gone little beyond a few power management things myself (On a Thinkpad X1C Gen 6). I get decent battery life, but I know it could be better. And giving up Arch would be a tough sell to myself.


I have the same laptop as you. Just install the "tlp" package[1]. I'm using Debian, but the Arch package should have the same name. You can tweak the settings (/etc/default/tlp on Debian), but out of the box that should get you around ~4W depending on what is running on your laptop. With light usage that gives more than 10 hours of battery life, even if you don't charge the battery to the max to extend its life (see START/STOP_CHARGE_THRESH_BAT0 for Thinkpads in the tlp configuration).

If you have a SSD you can be aggressive on the disk suspend timeout, as there's no spin up/down wear issue as with mechanical disks.

With this the battery life is good enough that I never felt the need to optimize further.

[1] https://linrunner.de/en/tlp/tlp.html
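
For the curious, a minimal sketch of the charge-threshold settings mentioned above (the keys are TLP's standard ThinkPad options; the values here are just examples):

  # /etc/default/tlp (Debian; the path may differ on other distros)
  # Start charging when below 75%, stop at 80% (illustrative values)
  START_CHARGE_THRESH_BAT0=75
  STOP_CHARGE_THRESH_BAT0=80

Running "sudo tlp start" (or rebooting) applies it, and "tlp-stat -b" shows the thresholds in effect.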


Fantastic. Thanks for the info. I've installed tlp, but I never really looked too deeply into it (yet). Most days at the office I'm plugged in - and that can be a problem too with the battery constantly receiving a charge.


Very little was needed at all, and what I did do is mostly working around firmware bugs.

- Force deep sleep mode, as s2idle doesn't work.

- Set the ASPM policy. To anything. Basically the default doesn't work. (I went with 'powersave', obviously.)

- Run powertop --auto-tune on boot.

- Set the cool-bottom thermal profile, but that's just to make it work as a lap top.

- Run a BIOS upgrade. :V

(fwupdmgr did the whole job for me~)
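
A rough sketch of how those tweaks are typically applied, for anyone wanting to replicate them (the sysfs paths are the standard kernel ones; persisting them across reboots is left to a kernel parameter or a small boot-time unit):

  # Force S3 ("deep") suspend instead of s2idle; can be made permanent
  # with the mem_sleep_default=deep kernel parameter
  echo deep | sudo tee /sys/power/mem_sleep

  # Set the PCIe ASPM policy (also settable via pcie_aspm.policy=powersave)
  echo powersave | sudo tee /sys/module/pcie_aspm/parameters/policy

  # Apply powertop's suggested tunables; run at boot to make it stick
  sudo powertop --auto-tune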


Awesome, thanks. My BIOS upgrades are such a mess. I get notified that I have them.... but they always fail. I haven't dug into the logs to see why. This is honestly the most lazy I've ever been with a Linux install in 20 years, mostly because everything is working "well enough" that I haven't spent time tweaking it. Better battery life/cooler bottom does sound like something I should strive for though.


I run Fedora on a Thinkpad X1C Gen 3. tlp[1] has been, by far, the best tool for optimising my power usage. I only use powertop for some monitoring, as these days it recommends some less-than-ideal tunables, especially with regard to SATA power management ("med_power_with_dipm" vs "min_power" [2][3]).

Some modules also have blacklists that prevent certain power-saving modes from being enabled. This is true for the snd-hda-intel module, to prevent a well-known popping sound. Since disabling the blacklist I haven't encountered the issue, but YMMV. Similarly, to enable ASPM on the PCIe bus, I've had to force it via a kernel parameter.

With regard to the GPU, I noticed my Intel GPU can scale its frequency between 300MHz and 950MHz, but the default minimum was set to 350MHz. Other tweaks, such as enabling framebuffer compression [4], can save you a few more watts.

While the Linux defaults do their best to be suitable across different hardware configurations, you'll have to meddle with many parameters to get things customised for your machine, and that can bring some instability with it, so exercise caution.

Oh, and also be mindful of the apps you run. The Great Suspender for Chrome can save you some watts, cpu and memory by suspending tabs that are not in use. And switching to a light desktop such as Sway or i3 can also do wonders.

As I type this, my battery is reporting a draw of 3.1 watts, 30% battery remaining with 4:40 of operating time at the current workload.

[1] - https://wiki.archlinux.org/index.php/TLP

[2] - https://hansdegoede.livejournal.com/18412.html

[3] - https://wiki.archlinux.org/index.php/Power_management#SATA_A...

[4] - https://wiki.archlinux.org/index.php/Intel_graphics#Framebuf...
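
To make the GPU and SATA bits above concrete, a sketch of the usual knobs (the card0/host0 numbering varies per machine; the paths are the standard i915 and libata ones):

  # Intel GPU frequency floor (i915); drop it from the shipped 350 MHz
  # to the 300 MHz hardware minimum mentioned above
  cat /sys/class/drm/card0/gt_min_freq_mhz
  echo 300 | sudo tee /sys/class/drm/card0/gt_min_freq_mhz

  # Framebuffer compression [4] is typically a kernel parameter:
  #   i915.enable_fbc=1

  # SATA link power management policy [2][3]
  cat /sys/class/scsi_host/host0/link_power_management_policy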


Thanks! Yeah, just running Slack and Chrome (not to mention Spotify sometimes) can send my resource usage fairly high. I went back to Vim (from a couple of Electron-based editors running in vim-emulation mode). I used i3 a bit for about a year and never quite got it to my liking. Things like bluetooth and audio management seemed kinda hacked together - because they were. i3 isn't an environment, it's just a window manager. That's totally great, but KDE seems so seamless in this area. Worth looking into again though.


The thing with Arch is that it’s mostly not extra tweaks that give you better battery life; it’s not having all the crap that Ubuntu preloads running in the background.


Folks think I'm crazy for using Arch as my main dev machine. I'm not an Ubuntu hater at all (it runs my main server in the basement at home)... but it'd be awfully hard to go back to using it as my main machine.


Really? Arch is... boring. Once you've set it up, nothing happens without your permission and you can just keep developing away. Sure you have updates imposed on you, but you can delay them a bit if you don't have time right now, and it's not like upgrades are indefinitely avoidable on any other OS.

Using Arch often gives me advance warning that my apps will break on users' machines due to changes in libraries. I see the changes first and can add compatibility for the new library version such that it's fixed before affecting my users.

I love it as a daily driver.


Yeah, for me, it's "just right". Most detractors claim it's "bleeding edge". What? Just because it's got something newer than Postgres 9.1 available?? (This was my argument against Debian/Ubuntu for a long time.)

Totally agree that it's a great daily driver. It has EVERYTHING!


Totally. Arch feels so much snappier on the same hardware that I'm surprised how much headroom my machine has left for development.


IIRC Clear Linux's cpufreq governor defaults to performance, in contrast to the powersave governor that most other Linux distributions appear to be using. Hence the difference in idle power consumption, I guess.
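
Easy to check on any distro, for what it's worth (cpupower is the usual tool, or just read/write the sysfs file directly):

  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  sudo cpupower frequency-set -g powersave   # or "performance"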


How did you measure the power usage on Windows?

I'm assuming you used Powertop on Linux.


I used Powertop on Linux, yes.

For Windows, I simply let the laptop run until it had used 50% of the battery. Watts = Joules / second.

On Linux, I used the same method to calibrate against powertop. (But found less than 5% discrepancy anyway.)
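
For anyone wanting to reproduce the rundown measurement, a minimal sketch, assuming the battery reports energy_now in µWh under /sys/class/power_supply/BAT0 (some firmwares expose charge_now in µAh instead):

  E1=$(cat /sys/class/power_supply/BAT0/energy_now); T1=$(date +%s)
  # ... run the workload on battery for a while ...
  E2=$(cat /sys/class/power_supply/BAT0/energy_now); T2=$(date +%s)
  # average draw in watts = µWh consumed * 3600 / 1e6 / elapsed seconds
  echo "scale=2; ($E1 - $E2) * 3600 / 1000000 / ($T2 - $T1)" | bc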


The only valid approach is to remove the battery and plug it into a kill-a-watt or similar.


Performance and power saving features are vastly different between battery and AC so I don't feel that is relevant at all.

If we are talking about battery-life at least.


Yes, and that would be accounting for the wrong thing. Battery life can be stretched out with user hostile things like lowering the brightness to two-digit nits and disabling all background tasks, none of which really help gauge the efficiency.

In fact, what mobile device manufacturers have learned is that the best way to save power is to ramp up power to complete a non-IO task in as little time as possible then go back to a lower TDP as that is where peak efficiency (rather than minimal consumption) lies.


Accounting for the wrong thing?

I think most people are interested in their battery life when they are measuring their battery life.

Numbers you get while plugged in won't have any relevance to battery life or performance. That even applies to mobile devices, despite their learnings.


Many laptops aren’t stable at full speed without a battery.


I’m not aware of anything other than a MacBook or some micro ultrabooks that do that, because they try to keep the power brick size down. Any device with a user-removable battery should be able to do that, provided you’re using an adapter that supplies sufficient current.


That's a good way to estimate, but not appropriate for comparing to powertop, due to overhead. Also, "50%" of energy is an estimate based on voltage, not actual energy usage. That's why Apple has stopped showing percentages in their OSes.

You should use the same methodology for both.


I thought most laptop batteries use coulomb counting rather than voltage for estimating battery level?


Yes, any decent laptop for sure uses coulomb counting, not a simple voltage measurement.


I'd assume the voltage->percentage function was worked out and the percentage displayed was thus accurate. It would be weird if that wasn't the case.


It is not. I've programmed a battery (accumulator) controller. The best way is to track the energy coming in and out. Using only voltage is imprecise because the voltage/charge curve looks a bit logarithmic, so the more charge you have, the more precision you need while measuring it. This curve, as well as the total possible energy capacity and the ability to charge, depends on temperature. Also, you normally have multiple cells that age differently.


I think it varies with the age of the battery and the operating temperature (probably other things too). Although if they were clever they could probably incorporate those parameters as well..


They use coulomb counting, along with an estimate of how much safely-usable energy is left in the battery.

That estimate is usually accurate, but it can drift a bit over time if you never fully recharge or discharge the battery. Also, when the battery gets old it starts having trouble delivering full current when mostly discharged, which is why you'll sometimes see shutdowns at 5-15% remaining.


You could do further browser optimizations through about:about


The XPS 13 9380 is available with Linux pre-installed, why would you want to mess with that? Or was it a fun/hobby thing?


I feel that going from 8W to 3W, while gaining a more usable system that also sits on ZFS instead of ext4, is more than enough reason.

ext4 doesn't detect corruption. How am I supposed to trust it?


And corruption actually happens.

My Dell has an SK Hynix NVMe drive that apparently doesn't handle writes safely on power failure. I know because it somehow woke up in my bag and ran the laptop battery down to 0.

BTRFS was not happy with the state of anything on the drive. I was able to do a rescue copy and verify it with my backups, so after a full restore everything turned out OK.


It seems like a good deal but what about the support and the upgrade path from Dell?

I get what you're saying about the filesystem. Personally, I use par2 for the most important things (photos).


What about it? It doesn't seem to be enormously useful, given I can do so much better on my own.

I got the laptop to have well-supported hardware, not because I was expecting a lot of help from Dell.


Is that all there is to it, just well supported hardware? Dell didn't configure Redhat in any way, to make better use of the hardware? I'm thinking trackpad, sleep and hibernation, graphics etc.?


Ubuntu, not Redhat.

And no, it seems they didn't. They haven't forced deep sleep, for example.


Thanks, very interesting to know about these little shortcomings, and how you solved them.


> Until now, Clear Linux wasn't even on my radar. Today, it is. Anyone have some experiences with this distro (for better or worse) that they'd like to share?

I'd urge some caution. It depends on your use case and the accuracy requirements you have. Clear Linux gets a chunk of its speed boosts by using compiler optimisations that most distributions won't touch because of the possibility they'll reduce accuracy and move outside the bounds of various standards, e.g. -ffast-math. Clear Linux also sets it so that by default when you go to compile stuff, it ends up picking up their "optimised" set of compiler flags too.

It works, it's fine, it's fast. Just make sure you know what you're getting and what the consequences for your software might be (I'm sure most people _don't_ have strict IEEE standards to care about, for example)
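
Clear's exact flag set varies per package, but to illustrate the kind of build being described, something like the line below is representative; -ffast-math in particular turns on flags such as -ffinite-math-only and -funsafe-math-optimizations, which step outside strict IEEE 754 semantics:

  # Illustrative only, not Clear Linux's actual per-package flags
  gcc -O3 -march=native -ffast-math -o myapp myapp.c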


Booting a Sun in the 90s took so long that we had the janitor go and power up the CAD lab half an hour before anyone got to the office. Whatever you are nostalgic for wasn’t a Sun.


Our Sun lab ran on SunOS 4.1.1. Its NFS implementation was so unstable we'd get perhaps an hour and a half of use at a stretch before the file server would reboot... on an 8 machine network.


When I was a student, we had around 80 machines without local disks. It worked fine. They added local disks the year after, with a noticeable gain in performance. I loved those machines.


Been using it on my work laptop for about a year now, it's nice if you want a normal GNOME desktop and cloud tools. It silently updates itself in the background multiple times a day, which is really nice, there's really no need to do manual maintenance.

Most of the useful apps are available via flathub, which it comes with so you just install slack, vscode, etc. that way.


Clear Linux is blazing fast (compiled with Intel compilers), and supposedly has a strong security model -- but it also has an extremely limited package selection. To the point that it's just not usable for me.

Also, its package manager forces you to install "bundles" of somewhat related packages rather than just the specific packages you want and their deps. Maybe I'm missing something, but I find this very odd.


> compiled with Intel compilers

This is a common misconception. It's not built with ICC, it's built with GCC (and some parts maybe with Clang).

When did you try Clear? They have about 6200 packages now, but I'm not sure if that's still "limited" in relative terms.


Oh interesting. I did not know it used GCC. It feels super snappy, nonetheless.

Last time I tried it was a couple of months ago. I don't recall what it was missing, but it was some core part of my normal python development environment.


My experience with Clear Linux maybe a year ago was that it was one of the more unusual Linux distros I've tinkered with. They seem to be reinventing the wheel in many places; for example, it has its own package manager and its own unique implementation of containers. Stuff that wouldn't seem to have much to do with performance.

Also, NVIDIA CUDA drivers didn't work. Not sure whose fault that is. Probably both NVIDIA's and Intel's.

All in all, it did boot quickly on AWS, but it was not a fun time at all.


I found it weird at first that you had to explicitly state SPARC and Solaris, but I guess we're now at the point where a lot of users here weren't around during Sun's shine.


I think Sun must've tried and failed. Boot times on late 90's era Solaris/Sparc systems were atrocious compared to Linux/x86 systems of the day.


Just to underscore this, here's a video of some guy booting an Ultra 45, the last workstation Sun ever made. It takes 30s just to go from power-on to firmware initialized and another minute and a half to start Solaris. Older equipment such as a SPARCStation 20 would take much longer.

https://www.youtube.com/watch?v=BjshAKzhxTE


On their mid-range "Enterprise series" systems (Enterprise 3500, 450, etc.) it was much worse. I remember it taking a good 2 or 3 minutes just to get through firmware initialization...


I love the bit on slide 16 where initializing 8GB of RAM takes 100ms, so they boot with only 2GB and then hotplug in more memory later.


There are some drawbacks to that. Memory flagged as hotplug can't be used by the kernel itself (because otherwise unplugging the RAM would risk crashing the system). That includes things like network buffers and a number of things which use the total RAM size as a guideline.

There was a bug in a proposed kernel patch that we were testing a while back. It ended up designating a good chunk of the machine's RAM as hotpluggable. The test suite was passing, but the benchmarks showed a drastic performance drop. It took a while to track down why that was happening.


Isn't the solution introducing another type of memory abstraction in the kernel next to "available since boot" and "hot pluggable", namely "later added but won't be removed"?


This solution didn't introduce it. The concepts being used have existed in Linux for quite some time.

Here's a patch of mine that added (some of) it from 2005: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


I think you may be confusing some of the aspects of hot-add with hot-remove. Memory which you want to later remove should be onlined as "movable" and it has some content limitations. But, not all hot-added memory must be onlined that way. You can very easily just online it as "normal", which is (believe it or not) normal and can have any allocation type placed there.
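
For context, a sketch of the sysfs interface being described (the memoryNN block numbers below are made up; they depend on the machine and its memory block size):

  cat /sys/devices/system/memory/block_size_bytes
  # Online a hot-added block as movable (removable later, but unusable
  # for kernel allocations) or as normal memory
  echo online_movable | sudo tee /sys/devices/system/memory/memory42/state
  echo online | sudo tee /sys/devices/system/memory/memory43/state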


Just last weekend I was talking to one of FreeBSD's VM gurus about doing exactly this in FreeBSD too. You have to be careful about it though -- some data structures are autosized at boot, and if you're not careful you can end up with systems "running out of memory" because they have too much memory.


> some data structures are autosized at boot

It's probably a good idea to audit all the things that are autosized at boot by ram size; some of them are probably much too large on a system with 128 GB or so, given some of the sizing was written when 128 MB was big. I know I've run into the size of IP fragment buffers being way too big, but not sure what else.
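
On the Linux side, a couple of well-known RAM-scaled defaults are easy to eyeball (just examples of the pattern, not the fragment buffers mentioned above):

  sysctl net.ipv4.tcp_mem   # TCP memory limits, derived from RAM at boot
  sysctl fs.file-max        # max open files, also sized from RAM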


Reminds me of Windows 95 "running out of memory" on systems with too much RAM.

https://devblogs.microsoft.com/oldnewthing/20030814-00/?p=42...


On Linux, most (maybe even all) of the data structures that must grow with memory size can be carved out of the memory being added. The main culprit is 'struct page', which typically needs 64 bytes of metadata for each 4k physical page (on x86 at least). Much of the code to do this stuff is in Linux's "sparsemem" infrastructure: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
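
Back-of-the-envelope, for the 8 GB case from the slides: 8 GiB / 4 KiB = 2,097,152 pages, and at 64 bytes of struct page each that's 128 MiB of metadata, roughly 1.6% of RAM:

  echo $(( (8 * 1024 * 1024 * 1024 / 4096) * 64 / 1024 / 1024 ))   # prints 128 (MiB)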


A lot of embedded systems have been doing this for years.


We've been progressively loading webpages for years. It's like the 7th layer of the stack is working its way down.


Lazy loading is not unique to the web or in any way a new idea. The web is in fact one of the last places it has appeared.


Indeed. In apps, lazy loading is something you add when you need it. But for kernels, it’s the first problem you tackle: how to pull yourself up by your bootstraps. Booting is tricky now, and it always has been, right back to the early machines that were booted by manually entering machine code via switches.

Interestingly AWS virtual machines still boot in 16-bit mode, then bootstrap up through 32 and then 64 bit modes, last I heard.


All Intel-compatible and emulated Intel-compatible processors do this. You'd have to run ARM to avoid it.


I thought x86 started in 64-bit mode when using UEFI?


More like the UEFI firmware puts it into 32-bit/64-bit mode before it gets to the kernel; the hardware didn’t change.


Those of you who are interested in such things might also like to watch my BSDCan 2018 talk: https://www.youtube.com/watch?v=HMywejyXB9k


Sorry for going off-topic: I checked your HN profile, and Tarsnap looks like what I've been searching for for a long time. It looks much more straightforward than Nextcloud, and it's open source too. Wish you success with your company.


Thanks!


> “mem=4096m” in cmdline to only init 2 GB

What is going on that makes this 2:1?


mem= doesn't control the amount of RAM. It actually just specifies the largest physical address the kernel should try to use. The first 4GB of physical memory typically has some non-RAM things in it like the part of the physical address space that's reserved for talking to PCI devices.

If you have 2GB of RAM, 2GB of "other stuff" in the first 4GB, and specify mem=4096m, you'll end up with a kernel that sees 2GB of RAM.
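
If you want to sanity-check how a given mem= limit actually landed, a quick look after boot (assuming a typical x86 layout):

  cat /proc/cmdline                    # confirm the mem= limit is in effect
  dmesg | grep -iE 'e820|Memory:'      # physical map and usable RAM seen by the kernel
  free -m                              # RAM actually available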


Just a mistake, should be 2048.


2gb user + 2gb kernel?


2 GB user + 1 GB swap + 1 GB rootfs unpacked in RAM, perhaps. They discuss a decompression operation, and the only place in the sequence where one would normally occur is unpacking a rootfs in RAM.


The kernel itself is compressed, and among the first things that happen after the bootloader hands off to the kernel is that the stub at the beginning of the kernel decompresses the rest of the kernel.

And in this case they don't appear to be using any initrd for the root filesystem, just hurrying to bring up access to the full filesystem over eMMC.


> SATA driver init takes 100 to 200 ms even without a real disk

That's pretty interesting. Wonder what it is doing?


SATA controller firmware has a hard-coded “drive predelay” that waits before enumerating attached devices, a relic from the days long before SSDs. Even when an option to set a custom amount is available in the BIOS, it is in addition to and not instead of, as there’s no bus master to tell it all attached disks have init and the controller may speak to them.

It wouldn’t necessarily have to be done synchronously up front at all, except when the BIOS hands off control of the system to the SATA controller it expects that afterward it has a definitive list of available boot devices and their properties, for selection/display in the boot menu and BIOS configuration.

Theoretically with UEFI and its NVRAM-resident settings, a boot device could be expressed as a namespace and path, and the firmware need not enumerate devices until it has determined that there’s a need to search for alternative/fallback boot devices, but the stack hasn’t been updated with that in mind.


The delay in question is in initializing the Linux driver, not in the system firmware scanning for bootable devices before running Linux. The Linux driver may be taking a while for similar reasons as the firmware delay you describe.

The hard drive predelay is also not as obsolete as you may think; I've encountered high-capacity SATA SSDs that take a surprisingly long time to come up after being hotplugged, to the point that Linux doesn't always succeed in bringing up the link on the first try.


Sorry, I missed that in the discussion. Thanks.

I’m willing to bet if the stack enforced tighter tolerances for modern non-rotating disks, manufacturers wouldn’t be able to slip such shoddy firmwares into production.


Probably checking for a real disk? I guess it's got to check all the bus nodes for a disk, even if there's not one.


I'd really like to get my Raspberry Pi booting in a couple of seconds. I have a few ideas for things I'd like to try, but the boot time is too long. I don't need X11, but I would like to use Raspbian.


If you do not need the graphical stack, there is room for optimization both in kernel and user space. On a similar board (4 ARM cores) with Ubuntu I managed to start user space applications in <2 seconds (from power on) and the complete boot (including Xorg/MATE) in <10 seconds. If you are interested, on Medium I have some write-up (same username, last post).



Think like an embedded developer. They’ve done a lot of work in this area.

https://elinux.org/Boot_Time


The Pi 4 is definitely much faster on boot. I wonder how much of your bottleneck is the SD read speed, though.


Yes, I'm getting around 10-15 seconds boot time for headless Raspbian. I'm sure it could be made even faster with a lightweight, application-specific distro.



> Just remove the sleep(2700).

Such high quality discussion too!


> SATA driver init takes 100 to 200 ms even without a real disk

While it’s a joke, they’re not particularly far from the truth, as noted in a different comment here. They do indeed wait for some parts to respond.


Would it be possible on multithreaded architectures to have a thread listening for the response, then setting a flag in /proc or somewhere else to indicate that the part is now active? That way, init code that does not depend on that device can continue to run.


2700 ms is 27x further from the truth than 100 ms.


I've seriously got to look into running Arch with the Clear Linux kernel. I knew they did some pretty serious optimization work, but this is definitely impressive.



Most of the bloat isn’t from the kernel. That’s the last thing you really optimize after getting rid of all the background services, startup tasks, etc.


The Clear kernel performed better in every single test run by Phoronix, except one (because it has retpoline and the others did not). https://www.phoronix.com/scan.php?page=article&item=arch-ant...


I think you misunderstood. I was saying in a typical stack, time from POST to desktop is significantly overshadowed by userland bloat, not that Clear kernel is not worth the optimization.


Yeah, but a stock Arch system has significantly less of that slowing it down than a typical stack. Getting to the bootloader still takes significantly longer than the actual OS boot, though. I'm not usually a supporter of the "all software has to be open source" thing, but between spying issues and horrible code, it seems like coreboot is a serious benefit. Wish I could get it on my machine.


> Systemd is ~1.5MB - the loading time for emmc is 100ms

Surely not all of systemd is faulted in immediately, is this really an issue?


If you want just the important parts faulted in, you need to get the important parts into their own pages, preferably in order. I don’t know whether profile-guided linking is a thing.


On my long-running debian box, /lib/systemd/systemd is 1.1MiB in size.

In /proc/1/smaps, I find the following executable mapping:

  55aa05f89000-55aa06076000 r-xp 00000000 fd:01 1823                       /lib/systemd/systemd
  Size:                948 kB
  KernelPageSize:        4 kB
  MMUPageSize:           4 kB
  Rss:                 752 kB
  Pss:                 489 kB
  Shared_Clean:        504 kB
  Shared_Dirty:          0 kB
  Private_Clean:       248 kB
  Private_Dirty:         0 kB
  Referenced:          740 kB
  Anonymous:             0 kB
  LazyFree:              0 kB
  AnonHugePages:         0 kB
  ShmemPmdMapped:        0 kB
  Shared_Hugetlb:        0 kB
  Private_Hugetlb:       0 kB
  Swap:                  0 kB
  SwapPss:               0 kB
  Locked:                0 kB
  THPeligible:            0
  VmFlags: rd ex mr mw me dw 

So it appears 740KiB has been referenced, 752KiB resident.

There's also the read-only mapping:

  55aa06076000-55aa0609b000 r--p 000ec000 fd:01 1823                       /lib/systemd/systemd
  Size:                148 kB
  KernelPageSize:        4 kB
  MMUPageSize:           4 kB
  Rss:                 148 kB
  Pss:                  74 kB
  Shared_Clean:          0 kB
  Shared_Dirty:        148 kB
  Private_Clean:         0 kB
  Private_Dirty:         0 kB
  Referenced:          148 kB
  Anonymous:           148 kB
  LazyFree:              0 kB
  AnonHugePages:         0 kB
  ShmemPmdMapped:        0 kB
  Shared_Hugetlb:        0 kB
  Private_Hugetlb:       0 kB
  Swap:                  0 kB
  SwapPss:               0 kB
  Locked:                0 kB
  THPeligible:            0
  VmFlags: rd mr mw me dw ac 

Another 148KiB, so 900KiB, a good chunk of the 1.1MiB - though this is with a substantial uptime.

The really disgusting part is all the dependencies; `pmap 1` says 57MiB total, and there's not much anonymous memory to blame.


PGO detects hot/cold functions and places them together¹ ², but I imagine whatever it needs on startup is probably not especially hot. …Though it might all be cold, which would have the same effect, actually.

¹ At least on GCC but I would be very surprised if clang doesn't also do this.

² You can also do this manually with __attribute__((hot)) and __attribute__((cold))
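
For reference, the basic GCC PGO flow looks roughly like this (clang has analogous flags); whether the resulting layout actually helps startup paging is exactly the open question above:

  # 1. instrumented build, 2. run something representative, 3. rebuild
  #    using the collected profile ("--typical-workload" is a stand-in)
  gcc -O2 -fprofile-generate -o app app.c
  ./app --typical-workload
  gcc -O2 -fprofile-use -o app app.c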


PGO may not be the right tool for this. Mozilla used custom instrumentation and linker scripts to achieve symbol ordering optimized for startup.

https://web.archive.org/web/20100223202038/http://blog.mozil...


Awesome! I'm seriously thinking about running Gentoo with Linux Clear kernel!


"running Gentoo" - Sure, waste your life compiling code for next to nothing performance benefits that will never add up to the extra time it took to compile the code..


The site seems to be returning Not Found; is there any backup?


I have a hard time understanding why they have to have a supervisor there to begin with.


P10

> Rootfs Mouting -> Rootfs Mounting ?


Do I understand correctly that the motivation for this is that the rear-view camera on the Chery Exeed LX runs Linux? Is it a good idea to ship a car with safety components running Linux?


As opposed to running what?

Linux (as in, the kernel) is extremely stable and well tested.

If the components in question utilise any significant features of the kernel, I strongly doubt some homegrown kernel-esque project will achieve more stability and reliability than Linux.

Sure, if it can be done with a (minimal) amount of embedded code then it quite possibly should be, but if a whole kernel is actually needed then Linux is a fine choice.


QNX is still fairly popular in cars. Sometimes also QNX for the more critical things and Linux for the rest.


A huge percentage of cars are already using Linux on their head units. The ones showing the backup camera.

E.g. new Mazdas have a Linux distro with DBus communicating between binaries and Opera browser to show UI. The actual UI is written in JavaScript and HTML.


I miss the days when I could reverse without the aid of a computer.


Better to use a proprietary OS?



