
Linux 4.6 is out - tshtf
https://lwn.net/Articles/687511/
======
ajdlinux
A good summary of interesting new things in this release can be found at
[http://kernelnewbies.org/Linux_4.6](http://kernelnewbies.org/Linux_4.6)

~~~
dominotw
> Support for cgroup namespaces: This release adds support for cgroup
namespaces, which provides a mechanism to virtualize the view of the
/proc/$PID/cgroup file and cgroup mounts.

What are the advantages and practical applications of this?

Documentation here:
[https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d4021f6cd41f03017f831b3d40b0067bed54893d)
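
A quick way to see what the feature does, sketched with util-linux's unshare
(the user-namespace wrapper avoids needing real root, though it may be
disabled on some systems):

```shell
# Show this shell's cgroup as the host sees it: typically a long path
# such as "1:name=systemd:/user.slice/...".
cat /proc/self/cgroup

# Enter a fresh cgroup namespace (wrapped in a user namespace so no
# real root is needed). Inside it, the same file reports the process
# as sitting at the cgroup root "/", hiding the host-side hierarchy.
unshare --user --map-root-user --cgroup sh -c 'cat /proc/self/cgroup'
```

Container runtimes use this so a containerized process can't learn where its
cgroup actually lives on the host.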

~~~
wyldfire
Boy, I hope the memory constraints for cgroups are getting better. I'm stuck
on an ancient platform where cgroups are relatively new-ish (SuSE Linux
Enterprise 11). We hit a huge anti-feature where fs cache eviction from a
cgroup-memory-constrained process happened in the foreground and caused
astonishing latency. Sounds like it was the intended design. Hopefully it's
evolving to no longer be limited like that.

~~~
teraflop
I'm not sure what you're referring to as an "anti-feature". By design, if a
process requests memory when it's already at its limit, the kernel will wait
until more memory is available -- which could require a writeback, if the
contents of the pagecache are dirty.

Would you have preferred for the kernel to instead allow the process to exceed
its memory limit, even if that means evicting something else outside the
container? If so, why not just set the limit higher?

Apologies if I'm misunderstanding something.

~~~
wyldfire
Sorry if some of this is already clear, but I'll try to back up a bit and
give more context.

Linux uses greedy caching for filesystems and it's usually pretty awesome.
But merely by opting in to a memory-constrained cgroup, the kernel is (was?)
no longer able to reclaim those pages in the background. Perhaps it's because
the kernel daemons that would ordinarily do that don't have the right
context, or in order to have it they'd need one instance for each cgroup
dir/context.

Effectively what we found was that, with cgroup memory constraints disabled,
the filesystem cache still got evicted to make room for our memory
allocations, but with significantly lower latency.
I suspect that what might have been happening is that when cgroup-memory-bound,
we faulted in a disk-backed page and that triggered logic that decided
it needed to evict other pages in order to make room for this one. So far, so
good. But then perhaps it got opportunistic and figured "since I can't know
that I need to background-evict these pages I had better foreground-evict as
many as I can now." This is all wild speculation on my part. But what was
clear was that our process would suffer a multi-second stall while evicting
pages. Disabling cgroups-memory constraints meant that we would not hit this
same condition.

More background: our application would read mapped files into memory,
transform the data then write it out over sockets. The file-reading-task would
read files whose size exceeds the total system memory. All files were opened
read-only, the pages were never dirtied. There were no swap devices.

I don't recall all the specifics but the gist we got from the distro was that
"yeah, it works that way and it's not a bug."

All I wanted to do was make it so that the greedy process could eat up no more
than half of system memory with stuff like the fscache or dumb allocations.
That would mean that other processes would be much less likely to stall when
doing memory allocations.
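
What the parent describes wanting (capping one process's pagecache plus
anonymous memory at half of RAM) looks roughly like this with v1 cgroups, the
flavor available on SLES-11-era kernels. Paths vary by distro, GREEDY_PID is
a placeholder, and the writes require root:

```shell
# Compute half of system RAM; /proc/meminfo reports MemTotal in kB.
half_kb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 2 ))

# Create a memory cgroup and cap it at half of RAM (needs root).
mkdir /sys/fs/cgroup/memory/greedy
echo $(( half_kb * 1024 )) > /sys/fs/cgroup/memory/greedy/memory.limit_in_bytes

# Move the greedy process in. Its pagecache usage is charged against the
# limit, so reclaim happens inside the group -- in the faulting process's
# own context, which is exactly the foreground-eviction latency described.
echo "$GREEDY_PID" > /sys/fs/cgroup/memory/greedy/cgroup.procs
```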

------
wyldfire
> This release adds Kernel Connection Multiplexor (KCM), a facility that
> provides a message based interface over TCP for accelerating application
> layer protocols. ... a common use case is to implement a framed application
> layer protocol running over TCP ...

This was discussed on HN a while back. Gee, it's too bad that SCTP hasn't
found more popularity. Reliable datagram-based (or stream-based) messaging,
with support for binding to multiple endpoints.

In any case, KCM sounds like it's worth exploring.

~~~
saurabhjha
I heard that BSD is really good at network stuff thanks to kqueue. See for
example this comparison:
[http://www.eecs.berkeley.edu/~sangjin/2012/12/21/epoll-vs-kq...](http://www.eecs.berkeley.edu/~sangjin/2012/12/21/epoll-vs-kqueue.html)

In particular, epoll cannot be used with disks.

Is it still the case that BSD has an edge over Linux? Is KCM addressing the
problem of handling a lot of hanging connections?

Please point out if my assumptions are incorrect. I am new to kernel and
network programming.

Edit: By BSD I mean FreeBSD

~~~
pritambaral
epoll alone cannot be used with regular files, but I think Linux AIO can
signal an eventfd when an I/O completes, and that eventfd can be added to an
epoll set.

I don't know it well enough, perhaps someone more knowledgeable than I can
chime in.

------
neverminder
USB 3.1 SuperSpeedPlus (10Gbps) support - this is by far the most important
feature in my opinion, considering that USB Type-C (3.1) is projected to be
the most abundant socket on the planet.

~~~
cm3
I wish they would finally find a solution that actually fixes USB storage sync
lockups. Whatever USB storage you use, and whether it's USB2 or USB3, it's
very easy to lock up a machine when data is being flushed and the device is
slow to finish. Given all the asynchronicity in the kernel, it's surprising
there's such a Windows-like global wait-until-I'm-finished unresponsiveness
bug.

~~~
_wmd
I've never experienced the entire machine locking up (going back 10+ years),
only I/O becoming severely backed up when writeback can't keep up with new
I/O; and that always happened on non-USB media too.

~~~
cm3
Xorg becomes totally unresponsive and I cannot switch to a VT, so I assume
it's a global lock somewhere.

~~~
fit2rule
while sleep 10; do grep Dirty /proc/meminfo; done

Watch the dirty cache data when this is happening, if you want to know for
sure. Chain a couple of "sync"s onto that grep with &&, if you want to watch
stimulating behaviour..

(More often than not, its the device. Linux is dealing with it fine.)

~~~
cm3
As I wrote above, the device takes a long time to finish the flush operation,
and that's nothing special. But making the machine unresponsive to input, and
actually blocking everything else (say, the network stack, thereby
interrupting downloads), is not what I perceive as fine.
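
One common mitigation (not a fix for whatever lock is being held) is to
shrink how much dirty data the kernel will buffer before forcing writeback,
so a slow USB device never accumulates a multi-gigabyte backlog. The byte
values below are illustrative, and writing them requires root:

```shell
# Current thresholds, readable without root (percentages of RAM):
cat /proc/sys/vm/dirty_background_ratio  # background writeback starts here
cat /proc/sys/vm/dirty_ratio             # writers get throttled here

# Cap dirty data in absolute terms instead: start background writeback
# at 16 MB and throttle writers at 48 MB (setting the _bytes files
# zeroes the corresponding _ratio files; needs root):
echo $((16 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes
echo $((48 * 1024 * 1024)) > /proc/sys/vm/dirty_bytes
```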

~~~
johnp_
Which scheduler are you using on the USB device? Did you try deadline or BFQ?

There is a long standing (CFQ-related) kernel issue causing system lockups in
high I/O situations:

[https://bugzilla.kernel.org/show_bug.cgi?id=12309](https://bugzilla.kernel.org/show_bug.cgi?id=12309)
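
For reference, checking and switching the scheduler is a one-liner per device
(sdb here is a placeholder name; BFQ only appears in the list if the kernel
was built with it, which mainline at 4.6 was not):

```shell
# The active scheduler is the bracketed entry, e.g. "noop deadline [cfq]":
cat /sys/block/sdb/queue/scheduler

# Switch that one device to deadline (needs root, takes effect immediately):
echo deadline > /sys/block/sdb/queue/scheduler
```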

~~~
cm3
It's hard to understand why BFQ isn't available as an option, when the
upstream kernel already carries competing options for many other things.

~~~
digi_owl
The most likely reason is a clash of personalities.

~~~
cm3
Linus manages the kernel and should decide to include it, as BFQ is just
another option in a pluggable subsystem. BFQ is arguably more important than
many things because it fixes bad performance and unresponsive behavior. I hope
it's not that Jens Axboe has too much influence because he's the block layer
maintainer.

------
meeper16
I can't access the invitation to the church of the subgenius in the
README.txt's anymore. What's going on?

~~~
gcr
Would you mind expanding upon this a little? I feel like there's a wonderful
nugget of history here just waiting to be rediscovered by the broader hacker
news community. Sounds like it could make a good story?

~~~
Sanddancer
To add to the other posters: Slackware also included other SubGenius
recruiting material in the source tree, in /ap/gonzo. They stopped doing so
with Slackware 8.1, sadly. A link to the last version's material:

[http://slackware.cs.utah.edu/pub/slackware/slackware-8.0/sou...](http://slackware.cs.utah.edu/pub/slackware/slackware-8.0/source/ap/gonzo/)

~~~
tanderson92
They used to put a Dobbshead on the CDs, way back in the day. I remember my
dad's Slackware CDs had the photos on them; I always thought they looked like
Mr. Cleaver, naturally, since I watched that show as a kid.

------
frozenport
[http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.6...](http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.6-Kernel-Features)

For a summary of new features.

------
known
I'm now compiling it with optimized HOSTCFLAGS
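
For anyone trying the same, note that HOSTCFLAGS only affects the host-side
build tools (scripts/, Kconfig); flags for the kernel code itself go in
KCFLAGS. A sketch, assuming you're inside a kernel source tree:

```shell
# HOSTCFLAGS tunes the compiler flags used to build host tools only;
# KCFLAGS is appended to the flags used for the kernel itself.
make -j"$(nproc)" HOSTCFLAGS="-O2 -march=native" KCFLAGS="-O2"
```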

------
woybi
Shouldn't we say "GNU/Linux" instead of "Linux" alone?

~~~
longwave
No, because this is only about the kernel and nothing to do with the GNU
userspace.

~~~
woybi
You're right, thanks

------
known
I've downloaded and compiled it; it's working fine.

------
bryanmathew
As someone learning hacking, I find Linux really complex to hack on. That's
why I love the Linux OS.

------
thought_alarm
Look at this. Hacker News beats Slashdot to the punch with a Linux kernel
update thread.

~~~
sheraz
I get this reference.

------
meeper16
It's actually amazing how many software devs have no clue about the impact of
Linux, the command line (which they now call the 'CLI'), true editors, and
fast home-grown IDEs. A shift needs to take place at the core of software
development. Linux made everything possible.

~~~
karmajunkie
Ahh yeah, it was all Linux. The 40 years worth of computing history and
foundational operating systems before Linus came on the scene had nothing to
do with it.

~~~
imglorp
Sarcasm noted, but think back to 1991. We had a half dozen mostly incompatible
desktop OSes (mac, pc, amiga, cpm, etc) and around a dozen Unix-like variants,
mostly tied to proprietary hardware (hpux, aix, sun, apollo, dg, dec, etc).
The internet and decent interop protocols were just getting started. (Remember
DCE and RPC? Bleh.)

Linux now runs on almost _all_ of those with a mostly compatible API and ABI,
from raspi to Z-mainframe, speaks to hundreds of weird pieces of hardware, and
has hundreds more network protocols built in.

I'd say it was instrumental.

~~~
lmm
So does BSD, though. To my mind it's accidents of history (in particular the
AT&T lawsuit) that led to Linux being the thing, and if it hadn't been Linux
it would've been something else. (Indeed, I believe Torvalds said that if
386BSD had been available in Helsinki at the time, there would never have been
a Linux.)

~~~
ultraballer
True. If it wasn't for the license war between AT&T and BSD we would live in a
parallel universe where everyone is running FreeBSD everywhere.

Kind of cool how randomness creates forks in time.

~~~
ccozan
Absolutely true about the war that led to Linux going mainstream.

And btw, is there a list of such points in computing history, where a small
change in a decision would have led to a different computing environment
(i.e. "forks in time")? That would be a very good read, and a nice practical
introduction to chaos theory.

~~~
mamon
One such moment was when IBM signed its operating-system deal with Microsoft
on non-exclusive terms, leaving Microsoft free to license MS-DOS to IBM's
competitors. As someone put it: had there been one person with a brain at IBM
proof-reading that deal, Microsoft would never have become as big a company as
it is today. But there wasn't, so we ended up with Windows on 95% of PCs
worldwide.

