How nostalgic! Mind you, I say that in a good way!
This brings me back to the heyday of Sun Microsystems, when they controlled both the hardware (SPARC) and the operating system (Solaris). Sun didn't hesitate to use this to their advantage. With Clear Linux, I see Intel doing some very similar things. Yes, this was just a story about boot time, but it led me to take a look at Clear Linux, and it looks like it could be a star. I'm interested in how much more performance their distribution is able to squeeze out. I'm also wondering how they're dividing their focus here and what high notes they're trying to hit.
At a strategic level, I'm assuming they want to distinguish themselves from ARM? Are they looking to create a well-tuned reference for other Linux distributions on Intel to adopt? At a tactical level, I see they're focusing on performance and availability (which boot time would be a function of)... but are they tweaking for other things, such as reliability? Fault tolerance?
Until now, Clear Linux wasn't even on my radar. Today, it is. Anyone have some experiences with this distro (for better or worse) that they'd like to share?
> Anyone have some experiences with this distro (for better or worse) that they'd like to share?
Not much of one, but for a while I was playing with multiple distros trying to find the best possible power management for my laptop. (XPS 13 9380.)
Clear did better than the Ubuntu installation that came with it -- but a dead octopus would have done better than that. It was sitting at 6W for non-CPU-intensive work, and was also noticeably snappier. Windows on the same hardware sits at 8W when idling; same as Ubuntu.
More recently I've tried Arch, though, which somehow got that same workload down to 3.3W, albeit with a lot of adjustments... adjustments that didn't seem to affect performance, but clearly did affect power use.
It's just too bad that all of this optimization goes out the window if I fire up a browser, but I've had a fun time finding ways to avoid that.
Apart from that, obviously Clear's software repository is far more limited than Arch's, and it also seems harder to contribute to. I don't think that problem will be going away soon.
I'd love to hear more about the tweaks you've done to Arch. I've gone little beyond a few power management things myself (On a Thinkpad X1C Gen 6). I get decent battery life, but I know it could be better. And giving up Arch would be a tough sell to myself.
I have the same laptop as you. Just install the "tlp" package (The Laptop Project). I'm using Debian but the Arch package should have the same name. You can tweak the settings (/etc/default/tlp on Debian), but out of the box that should get you around 4W depending on what is running on your laptop. With light usage that gives more than 10 hours of battery life, even if you don't charge the battery to the max to increase its lifespan (see START/STOP_CHARGE_THRESH_BAT0 for Thinkpads in the tlp configuration).
If you have an SSD you can be aggressive with the disk suspend timeout, as there's no spin-up/down wear issue as with mechanical disks.
With this the battery life is good enough that I never felt the need to optimize further.
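For reference, here's a minimal sketch of the relevant /etc/default/tlp settings (the values are illustrative placeholders, not tuned recommendations; check the tlp docs for your model):

```
# /etc/default/tlp - illustrative values only
# Stop charging below 100% to extend battery lifespan (ThinkPad-specific)
START_CHARGE_THRESH_BAT0=75
STOP_CHARGE_THRESH_BAT0=80

# With an SSD, an aggressive suspend timeout is safe (no spin-up wear)
DISK_IDLE_SECS_ON_BAT=2
MAX_LOST_WORK_SECS_ON_BAT=15
```

After editing, restarting the tlp service applies the new settings.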
Fantastic. Thanks for the info. I've installed tlp, but I never really looked too deeply into it (yet). Most days at the office I'm plugged in - and that can be a problem too with the battery constantly receiving a charge.
Awesome, thanks. My BIOS upgrades are such a mess. I get notified that I have them... but they always fail. I haven't dug into the logs to see why. This is honestly the laziest I've ever been with a Linux install in 20 years, mostly because everything is working "well enough" that I haven't spent time tweaking it. Better battery life and a cooler bottom do sound like something I should strive for, though.
I run Fedora on a Thinkpad X1C Gen 3. tlp[1] has been, by far, the best tool for optimising my power usage. I use powertop only for monitoring, as these days it recommends some less-than-ideal tunables, especially with regard to SATA power management ("med_power_with_dipm" vs "min_power" [2][3]).
Some modules also have blacklists that prevent certain power-saving modes from being enabled. This is true for the snd-hda-intel module, to prevent a well-known popping sound. Since disabling the blacklist I haven't encountered the issue, however YMMV. Similarly, to enable ASPM on the PCIe bus, I've had to force it via a kernel parameter.
With regard to the GPU, I noticed my Intel GPU can scale its frequency between 300MHz and 950MHz, but the default minimum was set to 350MHz. Other tweaks, such as enabling frame buffer compression [4], can save you a few more watts.
While the Linux defaults do their best to be suitable across different hardware configurations, you'll have to meddle with many parameters to get things customised to your configuration, and that can bring some instability with it, so exercise some caution.
Oh, and also be mindful of the apps you run. The Great Suspender for Chrome can save you some watts, cpu and memory by suspending tabs that are not in use. And switching to a light desktop such as Sway or i3 can also do wonders.
As I type this, my battery is reporting a draw of 3.1 watts, 30% battery remaining with 4:40 of operating time at the current workload.
Thanks! Yeah, just running Slack and Chrome (not to mention Spotify sometimes) can send my resource usage fairly high. I went back to Vim (from a couple of Electron-based editors running in vim-emulation mode). I used i3 for about a year and never quite got it to my liking. Things like Bluetooth and audio management seemed kinda hacked together - because they were. i3 isn't an environment, it's just a window manager. That's totally great, but KDE seems so seamless in this area. Worth looking into again, though.
The thing with Arch is that it’s mostly not extra tweaks that give you better battery life; it’s not having all the crap that Ubuntu preloads running that makes the difference.
Folks think I'm crazy for using Arch as my main dev machine. I'm not an Ubuntu hater at all (it runs my main server in the basement at home)... but it'd be awfully hard to go back to using it as my main machine.
Really? Arch is... boring. Once you've set it up, nothing happens without your permission and you can just keep developing away. Sure, you have updates imposed on you, but you can delay them a bit if you don't have time right now, and it's not like upgrades are indefinitely avoidable on any other OS.
Using Arch often gives me advance warning that my apps will break on users' machines due to changes in libraries. I see the changes first and can add compatibility for the new library version, such that it's fixed before affecting my users.
Yeah, for me, it's "just right". Most detractors claim it's "bleeding edge". What? Just because it's got something newer than Postgres 9.1 available? (This was my argument against Debian/Ubuntu for a long time.)
Totally agree that it's a great daily driver. It has EVERYTHING!
IIRC Clear Linux's cpufreq governor defaults to performance, in contrast to the powersave governor that most other Linux distributions appear to be using. Hence the difference in idle power consumption, I guess.
Yes, and that would be accounting for the wrong thing. Battery life can be stretched with user-hostile things like lowering the brightness to two-digit nits and disabling all background tasks, none of which really helps gauge efficiency.
In fact, what mobile device manufacturers have learned is that the best way to save power is to ramp up power to complete a non-IO task in as little time as possible, then drop back to a low-power state, as that is where peak efficiency (rather than minimal consumption) lies.
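That "race to idle" idea is easy to sketch with made-up numbers: finishing a task quickly at high power and then idling can cost less total energy over the same window than running slowly at moderate power the whole time.

```python
def total_energy(active_watts, active_secs, idle_watts, window_secs):
    """Energy in joules over a fixed window: a burst of work at
    active power, then idle for the remainder of the window."""
    return active_watts * active_secs + idle_watts * (window_secs - active_secs)

# Hypothetical numbers: the same task takes 1s at 15W, or 10s at 5W.
race_to_idle = total_energy(15.0, 1.0, 0.5, 10.0)     # 15 + 4.5 = 19.5 J
slow_and_steady = total_energy(5.0, 10.0, 0.5, 10.0)  # 50 + 0  = 50.0 J
assert race_to_idle < slow_and_steady
```

The numbers are invented, but the shape of the tradeoff is why "racing to idle" wins when the idle floor is low enough.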
I think most people are interested in their battery life when they are measuring their battery life.
Numbers you get while plugged in won't have any relevance to battery life or performance. That even applies to mobile devices, despite what they've learned.
I’m not aware of anything other than a MacBook or some ultrabooks that do that, because they try to keep the power brick size down. Any device with a user-removable battery should be able to do that, provided you’re using an adapter that provides sufficient current.
That's a good way to estimate, but not appropriate for comparing to powertop, due to overhead. Also, "50%" is an estimate based on voltage, not actual energy usage. That's why Apple has stopped showing percentages in their OSes.
It is not. I've programmed a battery charge controller. The best way is to track the energy coming in and out. Using only voltage is imprecise because the voltage/charge curve looks a bit logarithmic, so the more charge you have, the more precision you need when measuring it. This curve, as well as the total possible energy capacity and the ability to charge, depends on temperature. Also, you normally have multiple cells that age differently.
I think it varies with the age of the battery and the operating temperature (probably other things too). Although if they were clever, they could probably incorporate those parameters as well.
They use coulomb counting, along with an estimate of how much safely-usable energy is left in the battery.
That estimate is usually accurate, but it can drift a bit over time if you never fully recharge or discharge the battery. Also, when the battery gets old it starts having trouble delivering full current when mostly discharged, which is why you'll sometimes see shutdowns at 5-15% remaining.
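A toy sketch of coulomb counting (all numbers hypothetical): integrate measured current over time to track charge in and out, rather than inferring state of charge from voltage alone.

```python
class CoulombCounter:
    """Track remaining charge by integrating measured current.
    Positive milliamps = charging, negative = discharging."""

    def __init__(self, capacity_mah, charge_mah):
        self.capacity_mah = capacity_mah
        self.charge_mah = charge_mah

    def sample(self, milliamps, seconds):
        # mAh delta = mA * (s / 3600); clamp to physical bounds
        self.charge_mah += milliamps * seconds / 3600.0
        self.charge_mah = max(0.0, min(self.capacity_mah, self.charge_mah))

    def percent(self):
        return 100.0 * self.charge_mah / self.capacity_mah

battery = CoulombCounter(capacity_mah=5000, charge_mah=2500)
battery.sample(-1000, 3600)  # draw 1A for an hour -> -1000 mAh
assert round(battery.percent()) == 30
```

A real gauge also recalibrates against full-charge and full-discharge events, which is why the estimate drifts if you never hit either end.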
My Dell has an SK Hynix NVMe drive that apparently doesn't guarantee safe writes on power failure. I know because it somehow woke up in my bag and ran the laptop battery down to 0.
BTRFS was not happy with the state of anything on the drive. I was able to do a rescue copy and verify it with my backups, so after a full restore everything turned out OK.
Is that all there is to it, just well-supported hardware? Didn't Dell configure Red Hat in any way to make better use of the hardware? I'm thinking trackpad, sleep and hibernation, graphics, etc.
> Until now, Clear Linux wasn't even on my radar. Today, it is. Anyone have some experiences with this distro (for better or worse) that they'd like to share?
I'd urge some caution. It depends on your use case and the accuracy requirements you have. Clear Linux gets a chunk of its speed boost by using compiler optimisations that most distributions won't touch because of the possibility they'll reduce accuracy and move outside the bounds of various standards, e.g. -ffast-math. Clear Linux also sets things up so that by default, when you compile stuff yourself, it picks up their "optimised" set of compiler flags too.
It works, it's fine, it's fast. Just make sure you know what you're getting and what the consequences for your software might be (I'm sure most people _don't_ have strict IEEE standards to care about, for example).
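The accuracy concern is concrete: -ffast-math lets the compiler pretend floating-point addition is associative, which it isn't. A quick illustration (Python uses the same IEEE 754 doubles C does):

```python
# IEEE 754 addition is not associative, so reassociating sums
# (as -ffast-math permits the compiler to do) can change results.
left_to_right = (0.1 + 0.2) + 0.3
reassociated = 0.1 + (0.2 + 0.3)

assert left_to_right != reassociated
print(left_to_right)  # 0.6000000000000001
print(reassociated)   # 0.6
```

Harmless for most code, but for anything accumulating millions of terms (or relying on NaN/infinity semantics, which -ffast-math also relaxes), the difference can be real.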
Booting a Sun in the 90s took so long that we had the janitor go and power up the CAD lab half an hour before anyone got to the office. Whatever you are nostalgic for wasn’t a Sun.
Our Sun lab ran on SunOS 4.1.1. Its NFS implementation was so unstable we'd get perhaps an hour and a half of use at a stretch before the file server would reboot... on an 8 machine network.
When I was a student, we had around 80 machines without local disks. It worked fine. They added local disks the year after, with a noticeable gain in performance. I loved those machines.
Been using it on my work laptop for about a year now. It's nice if you want a normal GNOME desktop and cloud tools. It silently updates itself in the background multiple times a day, which is really nice; there's no need to do manual maintenance.
Most of the useful apps are available via Flathub, which it comes with, so you just install Slack, VS Code, etc. that way.
Clear Linux is blazing fast (compiled with Intel compilers), and supposedly has a strong security model -- but it also has an extremely limited package selection. To the point that it's just not usable for me.
Also, its package manager forces you to install "bundles" of somewhat related packages rather than just the specific packages you want and their deps. Maybe I'm missing something, but I find this very odd.
Oh interesting. I did not know it used GCC. It feels super snappy, nonetheless.
Last time I tried it was a couple of months ago. I don't recall what it was missing, but it was some core part of my normal python development environment.
My experience with Clear Linux maybe a year ago was that it was one of the more unusual Linux distros I've tinkered with. They seem to be reinventing the wheel in many places; for example, it has its own package manager and its own unique implementation of containers. Stuff that would seem to not have much to do with performance.
Also, NVIDIA CUDA drivers didn't work. Not sure whose fault that is. Probably both NVIDIA's and Intel's.
All in all, it did boot quickly on AWS, but it was not a fun time at all.
I found it weird at first that you had to explicitly state SPARC and Solaris, but I guess we’re now at the point where a lot of users here weren’t around during Sun’s heyday.
Just to underscore this, here's a video of some guy booting an Ultra 45, the last workstation Sun ever made. It takes 30s just to go from power-on to firmware initialized, and another minute and a half to start Solaris. Older equipment such as a SPARCstation 20 would take much longer.
On their mid-range "Enterprise series" systems (Enterprise 3500, 450, etc.) it was much worse. I remember it taking a good 2 or 3 minutes just to get through firmware initialization...
There are some drawbacks to that. Memory flagged as hotplug can't be used by the kernel itself (because otherwise unplugging the RAM would risk crashing the system).
That includes things like network buffers and a number of things which use the total RAM size as a guideline.
There was a bug in a proposed kernel patch a while back that we were testing. It ended up designating a good chunk of the machine's RAM as hotpluggable. The test suite was passing, but the benchmarks showed a drastic performance drop. It took a while to track down why that was happening.
Isn't the solution to introduce another type of memory abstraction in the kernel, next to "available since boot" and "hot pluggable", namely "added later but won't be removed"?
I think you may be confusing some of the aspects of hot-add with hot-remove. Memory which you want to later remove should be onlined as "movable" and it has some content limitations. But, not all hot-added memory must be onlined that way. You can very easily just online it as "normal", which is (believe it or not) normal and can have any allocation type placed there.
Just last weekend I was talking to one of FreeBSD's VM gurus about doing exactly this in FreeBSD too. You have to be careful about it though -- some data structures are autosized at boot, and if you're not careful you can end up with systems "running out of memory" because they have too much memory.
It's probably a good idea to audit all the things that are autosized at boot by RAM size; some of them are probably much too large on a system with 128 GB or so, given some of the sizing was written when 128 MB was big. I know I've run into the size of IP fragment buffers being way too big, but I'm not sure what else.
On Linux, most (maybe even all) of the data structures that must grow with memory size can be carved out of the memory being added. The main culprit is 'struct page', which typically needs 64 bytes of metadata for each 4k physical page (on x86 at least). Much of the code to do this stuff is in Linux's "sparsemem" infrastructure: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
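The 'struct page' overhead is easy to ballpark: 64 bytes of metadata per 4 KiB page is a fixed ~1.6% of whatever RAM you add, which is why carving it out of the newly added memory itself is attractive.

```python
PAGE_SIZE = 4096   # bytes per page (x86 default)
STRUCT_PAGE = 64   # bytes of 'struct page' metadata per page

def metadata_bytes(ram_bytes):
    """Memory consumed by 'struct page' entries for a given amount of RAM."""
    return (ram_bytes // PAGE_SIZE) * STRUCT_PAGE

GIB = 1 << 30
# Hot-adding 128 GiB of RAM needs 2 GiB just for page metadata.
assert metadata_bytes(128 * GIB) == 2 * GIB
# The overhead ratio is constant: 64 / 4096 = 1.5625%
assert STRUCT_PAGE / PAGE_SIZE == 0.015625
```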
Indeed. In apps, lazy loading is something you add when you need it. But for kernels, it’s the first problem you tackle: how to pick yourself up by your bootstraps.
Booting is tricky now, and it always has been. Right back to the early machines that were booted by manually entering machine code via switches.
Interestingly AWS virtual machines still boot in 16-bit mode, then bootstrap up through 32 and then 64 bit modes, last I heard.
Sorry for off-topic: I checked your HN profile and tarsnap looks like what I've been searching for a long time. Looks much more straight forward than nextcloud and open source too. Wish you success with your company.
mem= doesn't control the amount of RAM. It actually just specifies the largest physical address the kernel should try to use. The first 4GB of physical memory typically has some non-RAM things in it like the part of the physical address space that's reserved for talking to PCI devices.
If you have 2GB of RAM, 2GB of "other stuff" in the first 4GB, and specify mem=4096m, you'll end up with a kernel that sees 2GB of RAM.
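A sketch of that arithmetic, treating mem= as an address cap rather than a RAM amount (the physical layout below is made up, and real firmware maps are messier):

```python
def visible_ram(regions, mem_cap):
    """regions: list of (start, end, is_ram) physical address ranges.
    mem= caps the highest physical address the kernel will use, so any
    RAM sitting above the cap is simply not seen."""
    total = 0
    for start, end, is_ram in regions:
        if is_ram:
            total += max(0, min(end, mem_cap) - start)
    return total

MIB = 1 << 20
# Hypothetical map: 2 GiB of RAM, then a 2 GiB reserved/PCI hole below 4 GiB.
layout = [(0, 2048 * MIB, True), (2048 * MIB, 4096 * MIB, False)]
# mem=4096m caps addresses at 4 GiB, so the kernel sees only 2 GiB of RAM.
assert visible_ram(layout, 4096 * MIB) == 2048 * MIB
```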
2GB user + 1GB swap + 1GB rootfs unpacked in RAM, perhaps. They discuss a decompression operation, and the only place in the sequence where that would normally occur is unpacking a rootfs into RAM.
The kernel itself is compressed, and among the first things that happen after the bootloader hands off to the kernel is that the stub at the beginning of the kernel decompresses the rest of the kernel.
And in this case they don't appear to be using any initrd for the root filesystem, just hurrying to bring up access to the full filesystem over eMMC.
SATA controller firmware has a hard-coded “drive predelay” that it waits out before enumerating attached devices, a relic from the days long before SSDs. Even when an option to set a custom amount is available in the BIOS, it is in addition to, and not instead of, the built-in delay, as there’s no bus master to tell it that all attached disks have initialized and the controller may speak to them.
It wouldn’t necessarily have to be done synchronously up front at all, except when the BIOS hands off control of the system to the SATA controller it expects that afterward it has a definitive list of available boot devices and their properties, for selection/display in the boot menu and BIOS configuration.
Theoretically, with UEFI and its NVRAM-resident settings, a boot device could be expressed as a namespace and path, and the firmware need not enumerate devices until it has determined that there’s a need to search for alternative/fallback boot devices, but the stack hasn’t been updated with that in mind.
The delay in question is in initializing the Linux driver, not in the system firmware scanning for bootable devices before running Linux. The Linux driver may be taking a while for similar reasons as the firmware delay you describe.
The hard drive predelay is also not as obsolete as you may think; I've encountered high-capacity SATA SSDs that take a surprisingly long time to come up after being hotplugged, to the point that Linux doesn't always succeed in bringing up the link on the first try.
I’m willing to bet that if the stack enforced tighter tolerances for modern non-rotating disks, manufacturers wouldn’t be able to slip such shoddy firmware into production.
I'd really like to get my Raspberry Pi booting in a couple of seconds. I have a few ideas for things I'd like to try, but the boot time is too long. I don't need X11, but I would like to use Raspbian.
If you do not need the graphical stack, there is room for optimization both in kernel and user space.
On a similar board (4 ARM cores) with Ubuntu I managed to start user space applications in <2 seconds (from power on) and the complete boot (including Xorg/MATE) in <10 seconds.
If you are interested, I have a write-up on Medium (same username, last post).
Yes, I'm getting around 10-15 secs boot time for a headless raspbian. I'm sure it could be made even faster with a lightweight application specific distro.
Would it be possible on multithreaded architectures to have a thread listening for the response, then setting a flag in /proc or somewhere else to indicate that the part is now active? That way, init code that does not depend on that device can continue to run.
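Something like that pattern can be sketched with a background thread and a readiness flag. Here a threading.Event stands in for the "flag in /proc", and the slow device probe is simulated with a sleep; the point is that unrelated init work proceeds while only the dependent step blocks.

```python
import threading
import time

device_ready = threading.Event()

def wait_for_device():
    # Simulate a slow part that responds only after its init delay.
    time.sleep(0.2)
    device_ready.set()  # the "flag" other code can poll or wait on

threading.Thread(target=wait_for_device, daemon=True).start()

# Init work that doesn't depend on the device keeps running meanwhile...
unrelated_work_done = True

# ...and only the step that needs the device blocks on the flag.
device_ready.wait(timeout=5)
assert unrelated_work_done and device_ready.is_set()
```

In a kernel the equivalent is deferred/asynchronous probing with a completion rather than a userspace thread, but the dependency structure is the same.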
I've seriously got to look into running Arch with a Clear Linux kernel. I knew they did some pretty serious optimization work, but this is definitely impressive.
Most of the bloat isn’t from the kernel. That’s the last thing you really optimize after getting rid of all the background services, startup tasks, etc.
I think you misunderstood. I was saying in a typical stack, time from POST to desktop is significantly overshadowed by userland bloat, not that Clear kernel is not worth the optimization.
Yeah, but a stock Arch system is significantly less bloated than a typical stack. Getting to the bootloader still takes significantly longer than the actual OS boot, though. I'm not usually a supporter of the "all software has to be open source" thing, but between spying issues and horrible code, it seems like coreboot is a serious benefit. Wish I could get it on my machine.
If you want just the important parts faulted in, you need to get the important parts into their own pages, preferably in order. I don’t know whether profile-guided linking is a thing.
PGO detects hot/cold functions and places them together¹ ², but I imagine whatever it needs on startup is probably not especially hot. …Though it might all be cold, which would have the same effect, actually.
―
¹ At least on GCC but I would be very surprised if clang doesn't also do this.
² You can also do this manually with __attribute__((hot)) and __attribute__((cold))
"running Gentoo" - Sure, waste your life compiling code for next-to-nothing performance benefits that will never add up to the extra time it took to compile the code.
Do I understand correctly that the motivation for this is that the rear-view camera on the Chery Exeed LX runs Linux? Is it a good idea to ship a car with safety components running Linux?
Linux (as in, the kernel) is extremely stable and well tested.
If the components in question utilise any significant features of the kernel, I strongly doubt some homegrown kernel-esque project will achieve more stability and reliability than Linux.
Sure, if it can be done with a (minimal) amount of embedded code then it quite possibly should be, but if a whole kernel is actually needed then Linux is a fine choice.
A huge percentage of cars already use Linux on their head units, the ones showing the backup camera.
E.g. new Mazdas have a Linux distro with D-Bus communicating between binaries and an Opera browser to show the UI. The actual UI is written in JavaScript and HTML.