What are the advantages and practical applications of this?
documentation here: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....
It's not entirely useful at the moment to be honest, since all it does is mask /proc/self/cgroup. But it could be extended later to also virtualise /proc/meminfo and /proc/cpuinfo to reflect the cgroup limits.
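For what it's worth, here is a minimal sketch of what that masking looks like from userspace, assuming a kernel new enough to have CLONE_NEWCGROUP (4.6+) and enough privilege (CAP_SYS_ADMIN) to unshare the namespace:

```c
/* Sketch: observe what a cgroup namespace masks in /proc/self/cgroup.
 * Assumes CLONE_NEWCGROUP is available (Linux 4.6+) and that we have
 * CAP_SYS_ADMIN; run as root or in a user namespace. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void dump_cgroup(const char *label)
{
    char line[512];
    FILE *f = fopen("/proc/self/cgroup", "r");
    if (!f) { perror("fopen"); exit(1); }
    printf("%s:\n", label);
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);
    fclose(f);
}

int main(void)
{
    dump_cgroup("before unshare (host-relative paths)");

    if (unshare(CLONE_NEWCGROUP) != 0) {
        perror("unshare(CLONE_NEWCGROUP)");
        return 1;
    }

    /* Inside the new namespace, paths are shown relative to the cgroup
     * this process was in when the namespace was created, so the host
     * hierarchy no longer leaks through. */
    dump_cgroup("after unshare (namespace-relative paths)");
    return 0;
}
```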
I'm currently trying to get a patch merged into cgroup core to allow for processes in a cgroup namespace to manage their own cgroups (without compromising the hierarchy limits). Hopefully it gets into 4.7. This will be exceptionally useful for cgroup hierarchies.
Would you have preferred for the kernel to instead allow the process to exceed its memory limit, even if that means evicting something else outside the container? If so, why not just set the limit higher?
Apologies if I'm misunderstanding something.
Linux uses greedy caching for filesystems and it's usually pretty awesome. But by merely opting in to a memory-constrained cgroup, the kernel is (was?) no longer able to reap those pages in the background. Perhaps it's because the kernel daemons that would ordinarily do that don't have the right context, or because in order to have it they'd need one instance for each cgroup dir/context.
Effectively, what we found was that opting out of cgroup memory constraints meant that we would still cause the filesystem cache to get evicted to make room for our memory allocations, but with significantly lower latency. I suspect that what might have been happening is that when cgroup-memory-bound, we faulted in a disk-backed page, and that triggered logic which decided it needed to evict other pages to make room for this one. So far, so good. But then perhaps it got opportunistic and figured "since I can't know that I need to background-evict these pages I had better foreground-evict as many as I can now." This is all wild speculation on my part. But what was clear was that our process would suffer a multi-second stall while evicting pages. Disabling the cgroup memory constraints meant that we would not hit this same condition.
More background: our application would read mapped files into memory, transform the data, then write it out over sockets. The file-reading task would read files whose size exceeded the total system memory. All files were opened read-only and the pages were never dirtied. There were no swap devices.
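For concreteness, a stripped-down sketch of that kind of workload (details are assumptions on my part: the file name is illustrative, and the "transform" is just a checksum):

```c
/* Sketch of the workload described above: map a large read-only file
 * and stream through it.  The pages are never dirtied, so they are
 * clean page cache that the kernel is free to reclaim at any time. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* Read-only mapping: no page is ever dirtied. */
    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Hint sequential access; the kernel may read ahead and drop pages
     * behind us more aggressively. */
    madvise(p, st.st_size, MADV_SEQUENTIAL);

    /* "Transform" stand-in: checksum every byte.  Each fault on a
     * not-yet-resident page is where the reclaim stalls showed up when
     * the process sat in a memory-limited cgroup. */
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];
    printf("checksum: %lu\n", sum);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```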
I don't recall all the specifics but the gist we got from the distro was that "yeah, it works that way and it's not a bug."
All I wanted to do was make it so that the greedy process could eat up no more than half of system memory with stuff like the fscache or dumb allocations. That would mean that other processes would be much less likely to stall when doing memory allocations.
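For anyone wanting to try the same thing, this is roughly what that setup looks like with the cgroup v1 memory controller. The mount point /sys/fs/cgroup/memory, the group name "capped", and the workload name "my-greedy-process" are all assumptions/placeholders; on a unified (v2) hierarchy the knob is memory.max instead of memory.limit_in_bytes:

```c
/* Rough sketch: cap a greedy process (page cache and anonymous
 * allocations together) at half of system memory using the cgroup v1
 * memory controller.  Paths and the exec'd command are illustrative. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/sysinfo.h>
#include <unistd.h>

static void write_file(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(1); }
    if (fputs(val, f) == EOF) { perror(path); exit(1); }
    fclose(f);
}

int main(void)
{
    struct sysinfo si;
    if (sysinfo(&si) != 0) { perror("sysinfo"); return 1; }
    unsigned long long half = (unsigned long long)si.totalram * si.mem_unit / 2;

    const char *dir = "/sys/fs/cgroup/memory/capped";   /* assumed mount */
    if (mkdir(dir, 0755) != 0 && errno != EEXIST) { perror("mkdir"); return 1; }

    char buf[64], path[256];

    /* Limit total memory (anonymous + page cache) charged to the group. */
    snprintf(buf, sizeof(buf), "%llu", half);
    snprintf(path, sizeof(path), "%s/memory.limit_in_bytes", dir);
    write_file(path, buf);

    /* Move ourselves into the group, then exec the greedy workload. */
    snprintf(buf, sizeof(buf), "%d", getpid());
    snprintf(path, sizeof(path), "%s/tasks", dir);
    write_file(path, buf);

    execlp("my-greedy-process", "my-greedy-process", (char *)NULL);
    perror("execlp");
    return 1;
}
```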
This is because now a would-be attacker can't as easily tell whether they are inside a container or not after exploiting some daemon vulnerability.
The barriers to enabling the magic plan9 has are more along the lines of making it feasible for unprivileged users to mount and create namespaces themselves, which is complicated (afaik) largely by the way privilege escalation happens in unix systems.
This just looks like it prevents some information from leaking out about the host system from inside a cgroup by inspecting the /proc/PID/cgroup file.
I'd also dispute the suggestion that Plan9 is more advanced than Linux. It's different, and Plan9 definitely does some interesting things, but that doesn't automatically make it more advanced. Ignoring that "advanced" is hard to quantify, I'd say rather that Plan9 is simply based on different concepts. It's like comparing Windows and Linux, albeit maybe not quite so extreme, since Plan9 and Linux have more in common and Plan9 is simply not as heavily developed as either OS.
This was discussed on HN a while back. Gee, it's too bad that SCTP hasn't found more popularity. Reliable datagram-based (or stream-based) messaging, with support for binding to multiple endpoints.
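To make that concrete, here is a minimal sketch of the SCTP features I mean: a one-to-many (SOCK_SEQPACKET) socket delivering whole messages, bound to more than one local address (multi-homing) with sctp_bindx() from lksctp-tools. The addresses and port are illustrative, and you'd link with -lsctp:

```c
/* Sketch: reliable, message-oriented SCTP with multi-homing.
 * Requires lksctp-tools for <netinet/sctp.h>; addresses are examples. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int sd = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);
    if (sd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addrs[2];
    memset(addrs, 0, sizeof(addrs));
    for (int i = 0; i < 2; i++) {
        addrs[i].sin_family = AF_INET;
        addrs[i].sin_port = htons(5000);   /* same port on both endpoints */
    }
    inet_pton(AF_INET, "192.0.2.10", &addrs[0].sin_addr);    /* first NIC  */
    inet_pton(AF_INET, "198.51.100.10", &addrs[1].sin_addr); /* second NIC */

    /* Bind the one association endpoint to both local addresses. */
    if (sctp_bindx(sd, (struct sockaddr *)addrs, 2, SCTP_BINDX_ADD_ADDR) != 0) {
        perror("sctp_bindx");
        return 1;
    }

    if (listen(sd, 16) != 0) { perror("listen"); return 1; }

    /* sctp_recvmsg() then returns whole messages, not a byte stream. */
    return 0;
}
```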
In any case, KCM sounds like it's worth exploring.
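As I read the proposed interface (the structures in linux/kcm.h), the usage pattern looks roughly like the sketch below: create an AF_KCM socket, then attach an already-connected TCP socket together with a BPF socket-filter program whose return value gives the length of the next message. The two helpers are hypothetical stand-ins, and this assumes headers new enough to define AF_KCM:

```c
/* Sketch of KCM usage as I understand the proposed interface. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/kcm.h>

extern int load_framing_bpf(void);        /* hypothetical: bpf(2) loader for
                                              the message-framing program   */
extern int connect_tcp(const char *host); /* hypothetical: plain TCP connect */

int main(void)
{
    int tcp_fd = connect_tcp("example.com");
    int bpf_fd = load_framing_bpf();
    if (tcp_fd < 0 || bpf_fd < 0) return 1;

    /* The KCM socket sends and receives whole messages. */
    int kcm_fd = socket(AF_KCM, SOCK_DGRAM, KCMPROTO_CONNECTED);
    if (kcm_fd < 0) { perror("socket(AF_KCM)"); return 1; }

    struct kcm_attach attach = {
        .fd = tcp_fd,      /* transport: existing TCP connection    */
        .bpf_fd = bpf_fd,  /* framing: BPF program returning length */
    };
    if (ioctl(kcm_fd, SIOCKCMATTACH, &attach) != 0) {
        perror("ioctl(SIOCKCMATTACH)");
        return 1;
    }

    /* From here, send()/recv() on kcm_fd operate on whole messages
     * rather than on the underlying TCP byte stream. */
    return 0;
}
```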
In particular, epoll cannot be used with disks (a minimal demonstration is sketched below).
Is it still the case that BSD has an edge over Linux? Is KCM addressing the problem of handling a lot of hanging connections?
Please point out if my assumptions are incorrect. I am new to kernel and network programming.
Edit: By BSD I mean FreeBSD
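On the epoll point: the kernel refuses regular files outright, since a disk-backed fd is always "ready" in the readiness model epoll implements. A tiny demonstration (the file path is illustrative):

```c
/* Demonstrates that epoll_ctl(2) rejects regular files with EPERM. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void)
{
    int epfd = epoll_create1(0);
    int fd = open("/etc/hostname", O_RDONLY);  /* any regular file */
    if (epfd < 0 || fd < 0) { perror("setup"); return 1; }

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };

    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) != 0)
        /* Expected output: "epoll_ctl on a regular file: Operation not permitted" */
        perror("epoll_ctl on a regular file");

    close(fd);
    close(epfd);
    return 0;
}
```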
I don't know it well enough, perhaps someone more knowledgeable than I can chime in.
KCM seems really neat, though of course now we need a library that abstracts it away and has a fallback to regular TCP so that applications don't become Linux-only.
Here is the previous (spartan!) discussion: https://news.ycombinator.com/item?id=10310090
Watch the dirty cache data when this is happening, if you want to know for sure. Put a couple of "sync;sync;sync"s &&'ed onto that cat, if you want to watch stimulating behaviour.
(More often than not, it's the device. Linux is dealing with it fine.)
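If it helps, a small watcher along those lines (just polling the Dirty and Writeback lines from /proc/meminfo once a second, so you can see the dirty cache drain while the copy and the syncs run):

```c
/* Print the Dirty: and Writeback: lines from /proc/meminfo in a loop. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("fopen"); return 1; }
        while (fgets(line, sizeof(line), f))
            if (strncmp(line, "Dirty:", 6) == 0 ||
                strncmp(line, "Writeback:", 10) == 0)
                fputs(line, stdout);
        fclose(f);
        puts("--");
        sleep(1);
    }
}
```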
There is a long-standing (CFQ-related) kernel issue causing system lockups in high I/O situations:
With blk-mq + NVMe, and later LightNVM, that whole IO-scheduler mess will hopefully go away over the course of the next couple of years (at least for personal computers).
Edit: never mind, Johnp_'s comment and bugzilla link are highly interesting.
It feels like there were too many cooks involved this time round, trying to make the USB spec go in multiple directions at once.
Where before ports and speeds were joined in one spec, now you have no fewer than three specs that can be mixed as the device OEM sees fit.
First is the 3.1 data spec, which can work on both A and C plugs.
Then you have the C plug spec, which, as you point out, does not require 3.1 data.
Then there is the Power Delivery spec, which is all about turning USB into a generic power-delivery cable. And that, again, can be used with both A and C plugs, and with any and all data speeds.
For a summary of new features.
Linux now runs on almost _all_ of those with a mostly compatible API and ABI, from raspi to Z-mainframe, speaks to hundreds of weird pieces of hardware, and has hundreds more network protocols built in.
I'd say it was instrumental.
Kind of cool how randomness creates forks in time.
And btw, is there a list of such points in computing history, where a small change in a decision would have led to a different computing environment (i.e. "forks in time")? That would be a very good read, and a nice practical introduction to chaos theory.
You go on to praise Windows, as if it's okay to ignore history when it comes to MS.
Eh... I think people in HN get overly offended when people state the fact that Linux has been historically revolutionary.
That's disingenuous; not every market has the same weight. Microsoft owns the desktop market, and there are many countries where it also owns the enterprise/server market.
> people in HN get overly offended when people state the fact that Linux has been historically revolutionary
There is a big difference between saying "Linux was revolutionary" (considering the impact it had, absolutely true), and "Linux was revolutionary from a technical standpoint" (it wasn't).
AS/400. VMS. Plan9. Amiga. QNX. Even NT. All of these date to before Linux was created. (Strictly speaking, I believe Plan9's first public release was after Linux's, but work on Plan9 started at Bell in the mid 80s.)
VMS and NT handle async IO much better. QNX, and to an extent NT, are more reliable¹ by design (as microkernels²). AS/400 and Plan9 are based on more powerful and coherent abstractions ("everything is an object" and "everything is a file" respectively).
¹The NT kernel really is quite reliable, it's everything else that gives Windows a bad reputation for reliability.
²NT isn't strictly a microkernel, but it's very microkernel-ish.
Ok.. reluctantly, I'll bite.
Why does a shift "need" to take place? What do you believe is broken and how is a developer "understanding the impact" of Linux going to fix it?
Also, what is a "true editor" and conversely what makes an editor not "true"? Do you have examples? Can software development only be done in a "true" editor?
OS X also sees about a two-fold increase, whereas Windows sees a large decrease in use amongst developers compared to its general desktop usage. I think Windows should just switch to the Linux kernel, build a Windows container to run legacy programs, and port all their software to Linux if they want to win over devs. The "bash on Windows" thing doesn't appeal to me at all.
Of course, here on hacker news the culture is really more corporate than hacker.
> People forget that Linux is the dominant kernel/os because it dominates nearly every market that isn't the desktop
And people forget that there are other very popular markets using other toolsets.
So, when you say:
> here on hacker news the culture is really more corporate than hacker.
0 - http://www.theregister.co.uk/2013/11/16/sony_playstation_4_k...
1 - https://wiki.freebsd.org/Myths
Yeah, but *BSD isn't nearly as widespread as Linux.
This is mostly correct when referring to the kernel. Nevertheless, a kernel by itself is just one component of the low-level system. And I personally definitely do not consider the Android experience a "GNU/Linux" one - even the base libc is replaced with a quite broken (e.g. wrt POSIX) Bionic. This is leaving aside all the firmware issues, lack of administrator control without "rooting", etc.
So in other words: it is natural to "forget" that Linux is widespread on mobile devices, simply because the experience is extremely different.
"CLI" stands for "command line interface" and has been a very standard abbreviation for a very long time...