- The 5.4 kernel can use CIFS as a root filesystem, meaning CIFS can replace NFS for diskless boot.
- The 5.3 and 5.4 kernels both include prep for merging PREEMPT_RT into mainline. This will be a fantastic addition that benefits embedded Linux nerds like me.
Edit: Thanks for the downvotes; without a comment, how am I supposed to know what to do differently next time?
The PREEMPT_RT patch solves this by reimplementing all kernel spinlocks as fully preemptible mutexes with priority inheritance, and all interrupt handlers as preemptible threads.
The result is that if you have a set of real-time processes/threads that do not make system calls, it's guaranteed that the highest-priority set will be running within a bounded amount of time.
If you do make system calls, however (or even if you cause page faults because you didn't call mlockall() and accessed non-mlocked memory), then with the exception of some system calls that provide better guarantees, you can still hit kernel lock contention with other threads that may hold the same lock for an unbounded amount of time. So the system is more limited than dedicated real-time OSes, which are careful about this.
- Will these kernels become obsolete/unnecessary with the merge?
- What about BFQ, is it going to be merged at the same time too?
- This patchset has existed for a long time. What prevented an earlier merge?
- Is the merge already scheduled to happen for a specific kernel release, or "when it's ready"?
If it merges, sure. It's nice to see it may be getting close.
> What about BFQ, is it going to be merged at the same time too?
BFQ merged a couple of years ago.
> This patchset has existed for a long time. What prevented an earlier merge?
It's scary in the number of things it touches and the amount of semantics it changes. It's a big change in mindset. Many drivers have been broken, or simply unsupported, at times on the preempt-rt branches.
More and more of the underlying infrastructure has made it in over the past couple years.
> Is the merge already scheduled to happen for a specific kernel release, or "when it's ready"?
Nope... the most we can say is it is "close".
> "BFQ merged a couple of years ago."
Okay, got it. My question was driven by confusion about what exactly the package https://aur.archlinux.org/packages/linux-rt-bfq/ was. Reading the wiki on GitHub ( https://github.com/sirlucjan/bfq-mq-lucjan/wiki ) now, it's clear:
> Development version of BFQ
> The development version of BFQ [...] differs from the production version in that:
> - it contains commits not available for that kernel version;
> - it contains a lot of consistency checks to detect possible malfunctions.
So, linux-rt-bfq is linux-rt sprinkled with extra BFQ fixes not yet merged into mainline.
The above is for kernel 5.4, as the patchset for 5.5 doesn't exist at the time of this post.
After applying the patches, you'll want to run "make nconfig" or "make menuconfig", search for CONFIG_PREEMPT_RT, and enable it; then you should be good to go. The only downside in my experience is that VirtualBox doesn't work with the -rt kernel as of right now, so I can't use it as my daily driver. It is my preferred kernel for gaming and overall desktop use, though. I use a custom low-latency kernel when I'm not using -rt, and it's been working well for me.
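For reference, after enabling it the relevant lines in the resulting .config look roughly like this (option names have varied across -rt patch versions, so treat this as a sketch):

```
# "Fully Preemptible Kernel (Real-Time)" under General setup -> Preemption Model
CONFIG_PREEMPT_RT=y
# commonly enabled alongside it for timer resolution
CONFIG_HIGH_RES_TIMERS=y
```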
Aaaand downvotes for this as well, so I guess that means this is not it?
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
Folks downvote comments about downvoting because it's boring to read, and to discourage others from doing it.
Early downvotes can happen through accident or randomness; all it takes is one or two people to misread your comment, or mistakenly tap the down arrow. Or they might have a valid reason for disagreeing and are in the process of writing a comment putting a different point of view.
But getting offended and defensive and editing your comment to complain about it only makes the original comment worse, and leads to an off-topic subthread like this.
Sure, edit the comment if you need to, but try to make it clearer and more informative; that's what I do if I get an early downvote and realise the comment could have been better-worded.
To add some context: PREEMPT_RT has been around as a set of patches for a long time (if not this specific one, then at least some version of making Linux a real-time OS). Real-time matters when you need to know that something will run within a specified amount of time.
EDIT: Don't use this and think it will help performance, generally it kills throughput.
And adding "offensive" every other sentence assumes (for me at least; for others not so much) that you see us as crybabies who are easily offended.
I almost downvoted for positioning yourself as a purveyor of ethics with the implied assumption that everybody will agree with you.
But in general people around here indeed _are_ easily offended - see for example 'rzv' in this thread being downvoted for talking about this very problem. Depends of course on the thread and on the self-selected audience.
So, what can you learn from a downvote? For me it's: don't debate here; HN is not a platform suitable for debating ideas.
(Not by _you_ - but in general.)
PG seems to understand that a community censorship mechanism needs safeguards. From http://www.paulgraham.com/hackernews.html :
> I think it's important that a site that kills submissions provide a way for users to see what got killed if they want to. That keeps editors honest, and just as importantly, makes users confident they'd know if the editors stopped being honest. HN users can do this by flipping a switch called showdead in their profile.
How can editors be "kept honest" if a bunch of their minions can just downvote users for no reason and any complaining is forbidden?
pg eventually came down on the side of Argument 2, so voting can be either "I disagree" or "your comment is not useful."
I do agree that adding a comment when downvoting is useful to the downvotee, but alas sometimes these come off as mean or supercilious and leave a bad tone in general, spoiling the conversation (or igniting a flame war).
Rather, I would like to ignore all downvotes made by particular downvoters because I believe their judgements are highly flawed, independent of the quality of the original comment that was downvoted.
https://abstrusegoose.com/527 is highly relevant.
What are the advantages of this as compared with NFS?
But for me, NFS has always been a colossal pain to use. The server has to run in kernel-space. Shares have to be enumerated in /etc. There are a couple of userspace options, but I've never been able to make them work reliably. Once I do get it working, it hangs all the time for no easy-to-debug reason. Also it needs multiple ports to be open, and it expects UID and GID to be the same on client and server.
CIFS has its problems for sure. But it's been pretty straightforward for me every time I've had to use it. If I was trying to set up a production-line machine to flash Linux-based devices, I'd choose CIFS every time because it's so much less hassle. And now that it's rootfs-capable, I just might be able to do it.
> As a result, a CIFS root will default to SMB1 for now but the version
> to use can nonetheless be changed via the 'vers=' mount option. This
> default will change once the SMB3 POSIX extensions are fully [...]
Who thought re-enabling uses of SMB1 was a good idea?
SMB1 has to be used any time you need the POSIX extensions, with Samba at the server side and Linux at the client side.
I find it comes up reasonably often, because Samba is so configurable. For example remapping user ids, or mapping user-group permission bits; these are hard or impossible to do in NFS, depending on available NFS server version.
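For instance, the remapping described above is a few lines of smb.conf on the Samba side (share name, user, and masks here are illustrative):

```
[project]
    path = /srv/project
    # present every client as one local account, regardless of client uid
    force user = builduser
    force group = builders
    # map permission bits on newly created files and directories
    create mask = 0664
    directory mask = 0775
```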
A third way to do this with NFS is to forward the TCP connection over stunnel, ssh forwarding or other similar thing.
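A sketch of the stunnel variant on the client side (hostname and ports are placeholders); the NFS client then mounts from 127.0.0.1 instead of the server:

```
; /etc/stunnel/nfs-client.conf
client = yes

[nfs]
accept  = 127.0.0.1:2049
connect = nfs-server.example.com:2050
```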
I like Kerberos a good bit, and I think the complexity of running an LDAP/Kerberos infrastructure is greatly overestimated, but it is disappointing that none of the theorized alternatives ever really appeared. Last I read, LIPKEY was the only serious contender, and there were some security concerns that got it nixed.
SMB isn't just for filesystems. It is also used for printing, among other things. CIFS is the filesystem.
Outside of Linux/Samba folks, neither term is popular. Users say "Windows share" or "shared drive" or "network folder" or something like that.
It's crazy that the whole system grinds to a near halt if it loses connection to the NFS server, from an end user perspective.
I look forward to some time in the future when Debian incorporates this kernel. I prefer to use stock kernels. I used to enjoy messing with my distros, but these days I prefer stability.
Indeed. The only solution that works consistently, if the NFS server is not coming back up anytime soon: enable an NFS server on localhost; add the address of the failed NFS server to a local adapter; wait for the retries to finally get an error answer; then kill the IP address and the local NFS server.
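The sequence above can be sketched as a dry run; the script only prints the commands it would execute, and the server address is a placeholder:

```shell
#!/bin/sh
# Dry-run sketch of the "impersonate the dead NFS server" trick; nothing is
# executed, the commands are just printed in order.
nfs_rescue_dryrun() {
    dead=192.0.2.10   # placeholder: address of the failed NFS server
    echo "ip addr add $dead/32 dev lo"    # claim the dead server's address
    echo "systemctl start nfs-server"     # answer the stuck clients locally
    echo "# ...wait for client retries to get an error answer..."
    echo "systemctl stop nfs-server"
    echo "ip addr del $dead/32 dev lo"    # release the address again
}
nfs_rescue_dryrun
```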
Client side caching will only delay the inevitable.
If you also want to cache on local disk, there's "cachefilesd", which does exactly that. You can specify a certain percentage of the disk that should be kept empty, and cachefilesd will use the rest of the available space for caching.
(It works very well, but is broken on kernel 5.x for me (it just doesn't read from the local cache, even though everything looks fine). But I just mention it off-hand, I don't have time to diagnose this in more detail, I just remain on 4.15 for the time being.)
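For reference, the free-space thresholds live in /etc/cachefilesd.conf; a minimal sketch with illustrative values (culling pauses above brun, begins below bcull, and new caching stops entirely below bstop):

```
dir /var/cache/fscache
tag mycache
brun  10%
bcull 7%
bstop 3%
```

The NFS mount then needs the `fsc` option for the cache to actually be used.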
And the daemon responsible for this would end up hard-locking the VPS it was running on.
Oh yeah, here's the issue I found that mirrored my experience: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=751933
That was a rough experience since it would run fine until it hit a certain level of load.
NFSv4 is actually quite nice. Anything earlier should be avoided.
Wow, never thought I would see this. Are you saying that based on experience or just looking at its feature set?
I used to work on NFS performance on NetApp filers at NetApp, and NFSv4(.1, .2) were god-awful. I used to see almost 1/3 of v3 performance in simple read and write ops. On top of that, I needed to align all the planets to get it to not fail with EIO during the test. Is that not the case anymore? Did somebody do some magic I missed in the last couple of years?
I don’t understand NFS well enough to know what to believe.
Theo's comments in particular seem to start at
"NFSv4 is a gigantic joke on everyone."
This isn't politically incorrect, it's just noise; nothing is gained from this comment. Even Linus, with his infamous "rants", would have actual commentary laced in. Theo is just being dismissive.
After someone reaffirms that they would like to use it, he replies
"Hahahahaha. That's a good one."
"I guess by "all the other protocols" you must be rejecting all the rest
of your network traffic as "not protocols" or "not services"."
Again. Not helpful, just noise
So yeah, like you said--absolutely no useful information was conveyed in that thread. Besides any lurking bystanders learning those "in the group" are complete assholes.
So, since he wrote that in 2010, we can look forward this year to seeing if he was right.
And yeah, OpenBSD has been around forever... but you'd be hard pressed to call them anything more than a tiny little niche operating system. Probably because of how toxic they are.
why wouldn't AMD set aside a couple of engineers (or at least one) to make sure their processors are supported here? It's not like this would be a huge drain on their manpower, and the company itself should have an immense interest in the best possible Linux support.
I don’t get it, it seems like there are technical leaps but sloppy execution.
In the end I still trust Intel (even with Spectre/Meltdown etc.) more than AMD because at least my machine:
b) stays up long enough to have me even worry about an attack
That should have been treated as an all-hands-on-deck emergency by AMD.
R5 1600, never had any of these problems, and I have friends running Threadrippers that don't see these problems either.
I wonder how many of these vulnerable systems will be upgraded to kernel 5.6?
Was there a use case for “we need to pretend to have 65 years' worth of system time before 1970”?
- Why in the hell would a date value-- particularly one that's an offset-- be signed?
- - How were you planning on indicating dates prior to 1970-01-01?
- - - What date before 1970? We just all assume those do not exist.
- - - - "Now you've made me feel old." "How old are you?" "Let me put it this way: when I was born, time didn't exist."
The full context is ... kind of fun:
We don't want time implementations based on some definition of big bang version 1.1.3, and all other versions, as physicists change the definition of time since the big bang.
Lunchtime doubly so.
The first epoch-seconds computers aren't running things anymore. But it lives on as a standard.
So the issue isn’t “was it a good idea to count seconds starting at 1970”, but rather “why the hell would anyone use signed for a value that can only be positive? ... and then keep doing it once they realized the problem!?”
2105 or 2106 won't be an issue for unsigned. No one will be using a 32-bit standard much longer, let alone in the 30 years leading up to the problem date.
Side take... I sized our servers to be adequate on the assumption that virtual cores won't be turned on, just expecting more Spectre and Meltdown discoveries; and while AMD has fared better, it's not impossible they have their own demons.
1. Our major cost in the BOM is memory, not the CPU. So a 30% savings in CPU cost is not 30% off the bill, but much less.
2. Even if we found a way to tip the scale in AMD's favour, our binaries still need to run on the rest of the Intel servers without a significant perf hit. So our liberty to change is limited.
It's sad, but the reality is that we had to buy more Intel. But luckily, their prices are far lower than at our last purchase before AMD lit a fire under their asses. So there is that.
I assume the exploitable edge cases are so numerous and so hard to have 100% test coverage on (is it even possible?) that it is hard enough for Intel to deal with correct execution on their own platform.
In reality, with AMD the cost was better, the RAM/bus was faster, and I liked the specs and options of the 7xxx series better than the 7xx series.
They couldn't give me a reason not to choose AMD, only that "Dell always uses Intel for a reason".
That reason sometimes being that Intel paid them off to block AMD from competing in the market:
1) You don't know if a given linux kernel/other software will work unless you test it ... for each future version
2) The firmware updates for Intel and AMD are different.
Additionally, the excellent Intel C compiler focuses on their own processors.
The above doesn't mean you can't choose AMD, but don't assume they're interchangeable CPUs.
Disclosure: I worked for Transmeta, whose entire DC was based on AMD servers. The reason was that Intel was a larger competitor for their code-morphing CPUs than AMD was.
Coincidentally, Linus Torvalds entered the USA on a work visa from Transmeta after DEC bailed on his job offer.
I bought CS22 at Transmeta's wind-down auction, which I will donate to the Computer Museum. Several large CPU designs during that era were verified on it because it was a 4 CPU Opteron with 64 GB RAM, and 32 GB RAM wasn't enough.
Aside from Apple's A-series, that was the end of Silicon Valley being about silicon. (Many of the chip engineers on my last project ended up at Apple on the A-series.)
This is a new and creative use of the word "excellent". Intel are so dishonest that they were caught using their compiler as a malware delivery mechanism: it makes /your/ compiled binary test for an Intel CPU when run by /your/ customer, and if it finds your binary running on a competitor's CPU, e.g. AMD, it makes the code take every slow path, even though the optimised code runs fine on that CPU.
Wildly dishonest. "Malware delivery mechanism" is a somewhat more traditional use of the English language to describe the Intel compiler.
You cannot trust Intel. They've earned that reputation all by themselves.
> malware (n)
> software that is specifically designed to disrupt, damage, or gain unauthorized access to a computer system.
How is a dispatch system (which GCC supports) malware? Yes, Intel “cripples” AMD by requiring an Intel processor, but it’s not malware.
If it disrupts, that fits the definition you gave.
Or do you think a trojan that deletes your boot sector isn't malware?
Huh? Sure, some software may break, but there's more than enough AMD out there to make sure that linux and other common software won't break.
> Additionally, the excellent Intel C compiler focuses on their own processors.
IME it's actually not that commonly used outside of benchmarking (among other reasons, it's fairly buggy - perhaps somewhat of a chicken/egg issue).
I've been using Wayland out of the box on 19.04 and 19.10 to get fractional scaling and independent DPIs on multiple monitors (Thinkpads of various ages with Intel GPUs). If it's experimental, they've certainly hidden that well. It was just a login option on the display manager, with no warnings about it during install or later.
Hm, Wayland was the default in 17.10, then back to optional in 18.04 - and so it might stay:
I'm a little surprised: not being the default for 18.04 made a lot of sense, but I'm not sure why 20.04 won't see a switch.
It is still a very slightly rougher experience than xorg - mainly due to some 3rd party apps not fully handling it yet. But the scaling options more than make up for it with me. One of those features (either fractional scaling or independent DPIs) was still regarded as experimental enough to require a CLI command to enable it though.
So, not perfect, but good enough for me.
Another side effect of Intel's market penetration is that the Intel implementation of any given featureset is targeted first. Things like nested virtualization may work mostly-OK on Intel by now but are still in their infancy on AMD; for example, it appears that MS still blacklists AMD from nested virtualization. 
You have to factor in how stagnant Intel's chips have been for many years. There's simply not much new stuff showing up on Intel platforms, and half of the new features are fundamentally incompatible with Linux anyways and thus will never lead to upstreamable patches. AMD catching up to Intel on feature support also necessarily means AMD is adding features at a faster rate that requires more feature enablement patches over the same time span.
Which is just as likely to be more to do with commercial arm twisting and incentives from Intel than anything technical.
If you compare the flexible version targeted at June, and the non-flexible version targeted at August, you'll find that they're making almost the exact same compromises.
Nothing ever stops you from using an earlier version if it's more stable. So both schedules get to choose from multiple versions. Maybe the flexible one chooses from 6-month-old code to code 3 months in the future, via delaying; but the non-flexible one can choose from 9-month-old to 0-month-old code. It works out the same, and the only difference is how you label it.
The reality of doing LTS, plus LTS .1 mean that the next standard release didn't get as much attention. So in reality the first six month release was often very rough, with a lot of new things in it that would be straightened out over the course of the entire cycle.
As someone mentions later in the thread Canonical started doing HWE releases, which means you can run a later kernel on the LTS. So I'm running 18.04.4 (I think) which has a HWE kernel that is newer than the one that originally shipped - not just security fixes but newer hardware compatibility.
Still have an issue where, when I come back from the blank screen and log in, the login screen won't go away... all the top/side bar shows, but it covers apps... I wind up hitting Ctrl+Alt+F3 to log in to a terminal and reboot. Should really figure out the commands to kill and restart GNOME, but it's really been a pain. Not sure if KDE might be better.
I have a newish all-Intel setup, and 5.4 crashes to the point of insanity. I gave up and installed 5.5, and the problem went away. It's a serious issue for an LTS release.
And I say this as an Arch fan who would rather avoid Ubuntu...
Kind of a double-edged sword that keeps the discussion on target.
I like progressiveness a lot, and new features, etc., and I found a workaround (switched off 5 GHz at home), so it's OK. I can always install an older kernel too, which is the freedom of free software; I wasn't able to do stuff like that on OS X.
Arch on a Dell XPS: fine for two years, and now I have some issue where a network buffer doesn't flush (from a cursory investigation). I've been fairly sure it's a post-update issue. Thanks for the idea :)
Anyone tried it out yet, how does it look so far?
I'll be putting this kernel on all production systems, without testing, on Sunday night while everyone is off.
A bit sad about this one; Intel should talk to Oracle, ARM, and the Cambridge Computer Laboratory about how to implement this kind of feature properly.
However, there's not much that is of interest to me. Given that, plus the mammoth amount of changes in the kernel, I think that I'll be delaying updating to this kernel for a good, long time. Just in case.
Perhaps the mindset of web software and operating system kernels don’t overlap enough for this to be reasonable?
It started with a very small subset of syscalls, and they are now adding new ones at every release, at every stage.
Take BPF. It was meant for networking, and it is slowly transforming into a brand new kind of OS.
Even before a feature lands in mainline, we're far from waterfall. Usually a developer will open an RFC thread with a quick-and-dirty implementation of the feature, as a basis for discussion and so that people can start testing and tinkering with it to see what can be done.
New stable kernel branches are released every 2-3 months after weekly release candidates, with bugfix releases in between new stable branches and a new LTS branch every year or so. If you want anything "more agile" than that for OS kernel development, then you're probably prioritizing agile dogma over the realities of trying to not break the most fundamental component of the operating system.
Actual far-reaching systemic changes are a pretty small portion of this and most other stable kernel releases.