
Improved default settings for Linux machines - dctrwatson
http://tobert.github.io/post/2014-06-24-linux-defaults.html
======
kev009
Sorry, but this is really bad 'default' advice. Shocking, but defaults often
are defaults for a reason. Cranking everything up to 11 is a sign of
ignorance; step back and understand what you are doing first.

The mmap, file-max, and SHM advice is application-dependent. Understand what
your system is doing, and only increase these if necessary. e.g. PostgreSQL <
9.3 is the only large user of SHM I can think of offhand.

The limits.conf advice is also bad. You should keep a safety net here and
increase these as needed, per user, in /etc/security/limits.d
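
A per-user override would live in its own drop-in file; a sketch (the
postgres user and the values here are made up for illustration):

```shell
# /etc/security/limits.d/postgres.conf  (hypothetical example)
# <domain>   <type>   <item>    <value>
postgres     soft     nofile    8192
postgres     hard     nofile    16384
```

That way the rest of the system keeps the stock safety net.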

A less harmful guide would be something like "these are the knobs you may need
to turn for certain apps, and here is the documentation on what they affect."
This looks a bit better:
[https://wiki.archlinux.org/index.php/sysctl](https://wiki.archlinux.org/index.php/sysctl)

~~~
colanderman
Ya, this reminds me of car modders who suggest stupid things like drilling
holes in your air intake. Any variable whose change (1) is cheap or free, (2)
doesn't violate some emissions standard or whatnot, and (3) doesn't cut into
profit of a pricier model, is practically _guaranteed_ to be set by the
manufacturer at its optimal value for the vehicle's intended use. They're not
morons and they have an interest in maximizing the car's utility.

Sysctl variables in Linux meet all these same criteria. Linux is already tuned
for general use (or is damn close to it). Any knob you tweak is likely to make
things worse in a way you don't understand. Just leave things unless you have
a specific use case that requires different tuning.

~~~
zorbo
> is practically guaranteed to be set by the manufacturer at its optimal value

The optimal value for what? Speed? Performance? Comfort? Economics? I don't
drive a car, but wouldn't trade-offs apply to car tuning just like it does to
everything else?

~~~
colanderman
You clipped my sentence. "For its intended use", I said. Sporty cars are tuned
for performance. Cushy cars are tuned for comfort. Econoboxes are tuned for
economy. Of course if you want to make your Yaris do 0-60 in 6.0 s there are
changes you can make. But drilling random holes on your STi ain't gonna do
shit for its time on the track.

~~~
Cthulhu_
Sporty cars are tuned for performance, but the really high-end ones are often
speed limited, too; I guess some of these settings are the equivalent of
removing / turning off the speed limiter, if you know what you're doing / can
drive on a proverbial track where you have need for those speeds.

~~~
coldpie
That's exactly his point. Those speed limiters aren't there to arbitrarily
limit your fun, they're there because the stock wheels and tires can't handle
greater speeds. If you don't understand why the limit is there in the first
place, you're going to have a nasty surprise when you exceed the tires' limits
while going well north of 150 MPH.

------
slyall
Redhat and Centos have the command "tuned-adm" which has various machine
profiles with settings like this. It is an official thing supported by the
vendor.

[https://access.redhat.com/site/documentation/en-
US/Red_Hat_E...](https://access.redhat.com/site/documentation/en-
US/Red_Hat_Enterprise_Linux/6/html/Power_Management_Guide/tuned-adm.html)

e.g. when I run it on one of our KVM hosts:

    
    
      $ tuned-adm list
      Available profiles:
      - throughput-performance
      - laptop-ac-powersave
      - virtual-guest
      - latency-performance
      - enterprise-storage
      - default
      - spindown-disk
      - desktop-powersave
      - virtual-host
      - laptop-battery-powersave
      - server-powersave
      Current active profile: virtual-host
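
Switching profiles is a one-liner; a sketch with guards so it degrades to a
no-op on boxes without the tuned package (profile name taken from the list
above):

```shell
# activate one of the listed profiles (needs root and the tuned package)
command -v tuned-adm >/dev/null 2>&1 && sudo tuned-adm profile virtual-host

# confirm which profile is active
command -v tuned-adm >/dev/null 2>&1 && tuned-adm list | grep 'Current active profile' || true
```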

~~~
Firefishy
The git repo is here:
[https://git.fedorahosted.org/cgit/tuned.git/tree/profiles](https://git.fedorahosted.org/cgit/tuned.git/tree/profiles)

Easy enough to implement your own scripts using the profiles as a basis.

~~~
kolev
This is a very clean approach that keeps people from reinventing the wheel. I
just found this port of tuned to Ubuntu, although it hasn't been updated
recently: [https://github.com/edwardbadboy/tuned-ubuntu](https://github.com/edwardbadboy/tuned-ubuntu)

------
colanderman
"this disables swap entirely, which I think is virtuous" \-- sigh.

Swap isn't some artifact from the days of 640k, used only because memory is
expensive. Shit is _always_ stored on disk; swap just allows that shit to be
unused pages of active programs rather than actively used pages of files on
disk.

Without swap, you force the kernel to prioritize cold code paths of rarely
used daemons over, say, your web browser's cache. That's just dumb.
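
The knob that actually controls this trade-off is vm.swappiness, and you can
inspect it without touching anything (read-only, standard /proc paths; the
value 10 in the comment is only an example, not a recommendation):

```shell
# how strongly the kernel favors swapping out anonymous pages over
# dropping file cache (0-100 range; most distros default to 60)
cat /proc/sys/vm/swappiness

# current swap provisioning and use
grep -E '^(SwapTotal|SwapFree)' /proc/meminfo

# to bias toward keeping file cache without disabling swap entirely,
# an admin could lower it (as root):  sysctl vm.swappiness=10
```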

~~~
Jam0864
How much swap space should you allocate for e.g. 16GB RAM?

Every reference I've read suggests there's no need for swap space once you
have more than ~2GB of RAM, but I find that extremely hard to believe.

~~~
pizza234
I find this an interesting and common conceptual misunderstanding.

When somebody asks this question, he always thinks this way:

\- I have a system "A" with x GB of RAM and y GB of swap.

\- I have a system "B" with x+y GB of RAM and no swap, because it has all the
virtual memory A is using.

Well, the problem is that one should not compare system B with system A; he
should compare system B with system C:

\- I have a system "C" with x+y GB of RAM and z GB of swap.

System C will potentially perform better than B.

The generic explanation is that the kernel may decide it's better to swap out
some data and use the space for caching purposes. Within limits, this holds
regardless of the amount of RAM.

~~~
edwintorok
As long as 'z' is not proportional to x or y, that might be fine. If you size
'z' as a percentage of your RAM, you'll notice that your expensive server
with lots of RAM is way slower than a cheap one with less RAM and smaller
swap, just because on the cheap one it takes less time for the kernel to fill
the swap and finally kill the offending process.

------
lazyant
I'm very much against changing kernel settings in production servers without
really understanding the implications. Take for example the "swappiness = 0",
most likely what you think it does it's not what it does.

~~~
guidedlight
Isn't "swappiness = 0" recommended for SSD-based swap partitions to reduce SSD
wear?!?

~~~
simoncion
Relatively modern SSDs will write at least 500TB before dying:

[http://techreport.com/review/26523/the-ssd-endurance-
experim...](http://techreport.com/review/26523/the-ssd-endurance-experiment-
casualties-on-the-way-to-a-petabyte)

I've an SSD that's been running in my development Linux laptop for very, very
close to three years. The drive houses a couple of encrypted swap partitions
along with the rest of the system. According to the SMART attributes, I've
written 18TB to it in that time.

Don't worry about SSD wear. Really, don't. Either you'll get a drive that
succumbs to crib death or super-shitty v1.0 firmware, or you'll get a drive
that will last until _long_ after you outgrow it.

------
mediaserf
Swappiness is not just about swapping.
[http://www.linuxjournal.com/article/10678](http://www.linuxjournal.com/article/10678)
This is a great article on Linux swap and how it works. It will change your
life.

------
suprjami
Thanks so much, random guy on the internet.

I can't wait to see these settings cargo-culted onto systems of customers who
then complain Linux doesn't behave the way they expect it to.

Next time, keep your sysctls to yourself.

~~~
adobriyan
Note, vm.overcommit_memory is not even mentioned.

How disappointing.

------
rlpb
Doing one of these things may introduce a security vulnerability, depending on
the rest of your environment.

Some programs that use select(2) are known to assume that FD_SETSIZE is at
least the maximum number of file descriptors available (instead of checking
FD_SETSIZE). This lack of bounds checking may lead to a stack or heap overflow
and a security vulnerability.

More recently, if you build with fortified glibc options, then you'll get
automatic bounds checking, but do you know that your own daemons are built
this way?
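
The mismatch is visible from a shell: the per-process fd limit is tunable,
but glibc's fd_set is a fixed 1024-slot bitmap, so any fd >= 1024 passed to
FD_SET() writes out of bounds (a quick look, assuming a glibc system; the
header grep only works where kernel headers are installed):

```shell
# the per-process fd limit is what the article tells you to raise
ulimit -n

# ...but the fd_set size is compiled in, not tunable at runtime
grep -h 'define __FD_SETSIZE' /usr/include/linux/posix_types.h 2>/dev/null || true
```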

This is an example of why it's not a good idea to arbitrarily change a list of
default settings system-wide without understanding the implications. The
defaults have not been changed for a reason; otherwise distributions would
already ship with these changes.

References: [https://lists.ubuntu.com/archives/ubuntu-
devel/2010-Septembe...](https://lists.ubuntu.com/archives/ubuntu-
devel/2010-September/031446.html)
[http://www.outflux.net/blog/archives/2014/06/13/5-year-
old-g...](http://www.outflux.net/blog/archives/2014/06/13/5-year-old-glibc-
select-weakness-fixed/)

------
hassy
Increasing the number of file descriptors on machines running HTTP servers is
one of the first recommendations I make to my consulting clients.

It's much easier to overlook than you'd probably imagine. I have seen apps
serving hundreds of thousands of API requests per day that had the default
settings. It's one of those quick changes that can have a big impact.
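
Checking where a box currently stands is cheap (read-only, no root needed):

```shell
# system-wide ceiling on open files, and current allocation
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr      # allocated, unused, max

# per-process soft and hard limits for this shell
ulimit -Sn
ulimit -Hn
```

If file-nr's first field is anywhere near file-max, or a busy daemon's soft
limit is still 1024, that's the smoking gun.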

------
jebblue
>> Edit: I've run across a few comments complaining about these large max
values. The reason I set them high is that the machines I work on are not
multi-user in any way.

Then why is this posted at all? This isn't improved default Linux settings;
it's settings some guy likes for some customized environment.

------
Freaky
> # allow up to 999999 processes with corresponding pids

This is now the default in DragonFlyBSD:
[http://freshbsd.org/commit/dfbsd/3a877e444fff816b8a340d35fe3...](http://freshbsd.org/commit/dfbsd/3a877e444fff816b8a340d35fe32692d81753695)

------
angry_octet
An even better idea for developers is to reduce limits (memory, PIDs, file
handles) and start triggering those rarely-used (or non-existent) error
handling code paths.
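
A minimal sketch of the idea with plain ulimit; the daemon invocation is a
placeholder for whatever you're testing:

```shell
# run a workload under deliberately tight limits so error paths fire;
# the limits apply only to this subshell and its children
(
  ulimit -n 32                # cap open file descriptors
  ulimit -v $((64 * 1024))    # cap virtual memory at 64 MB (value is in KB)
  ulimit -n                   # confirm the cap: prints 32
  # ./your-daemon --selftest  # hypothetical workload goes here
)
```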

Also, I think I would prefer process sbrk failure to OOM killer activation:
set vm/overcommit_memory=2, the overcommit ratio to 80%, a decent swap size,
and have code actually handle errors. I.e. consistency versus randomness.

Not that randomness is bad for testing, cf Chaos Monkey:
[https://github.com/Netflix/SimianArmy/wiki/Chaos-
Monkey](https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey)

~~~
mje__
Agreed. The last thing you want is to have to debug those weird "it worked on
my desktop" issues that come from having a really weirdly configured system.

------
markhahn
wow, wrong on almost all the settings. seriously: maximize bufferbloat, cause
huge intrusive IO pauses, keep useless pages in memory, etc? the only good
ones there are kernel.panic = 300 and kernel.sysrq = 1

------
alex_duf
Don't change max values unless it's really needed. Not every production
machine needs billions of IPC handles.

My philosophy is to keep it at default unless you have an issue. Guess what?
It works just fine.

~~~
AlTobey
Increasing the max setting does not consume additional resources. It merely
makes it possible for applications to get the resources they ask for.

For example, just this morning Chromium started failing because I hadn't
disabled limits on one of my machines. I pulled down my standard settings,
applied them, and the problem went away. It won't be coming back, either.

~~~
adobriyan
> Increasing the max setting does not consume additional resources. It merely
> makes it possible for applications to get the resources they ask for.

From unswappable kernel memory, yeah.

> kernel.pid_max = 999999

> * - nproc unlimited

1,000,000 pids is 1 million task_structs.

On my quite stripped-out kernel, 14 task_structs fit into an order-3 slab --
14 objects per 32KB of kernel memory.

1000000 / 14 * 32 * 1024 = 2.18 GB of kernel memory

and that's not even counting other kernel structures!
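
The arithmetic is easy to reproduce in the shell (same figures as above;
integer division, so it lands just under the quoted 2.18 GB):

```shell
# 1,000,000 pids at 14 task_structs per 32 KB (order-3) slab
pids=1000000
per_slab=14
slab_kb=32
kb=$(( pids / per_slab * slab_kb ))   # slabs needed, times 32 KB each
echo "${kb} KB"                       # 2285696 KB
echo "$(( kb / 1024 )) MB"            # 2232 MB, i.e. ~2.18 GB
```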

------
KaiserPro
It would be good if you were to explain your reasoning for your changes.

As alluded to before, defaults are default for a reason. Having someone
explain _why_ they change them is a good exercise for both reader and author.

for example fiddling with swappyness means that you'll end up with less RAM
for important things, like file cache.

~~~
AlTobey
I've added some notes explaining my reasoning. I hope that helps. I'll dig in
and explain some of the settings more thoroughly in the future.

------
singlow
Interesting, but I tried them out on my Ubuntu 13.10 desktop and applying the
whole lot completely killed Chrome. Strangely, certain tabs would
consistently load fine, while others would end up with white screens --
seemingly consistently per-URI over several browser launches and a reboot.
Other things seemed to be working fine. Oddly, the settings screen was one
URI that did not work. I took his file handle limits and scrapped the rest,
and it went back to normal. My best bet is that something in there did not
agree with my video card settings.

~~~
velodrome
I have the same issue.

If I use the --disable-gpu flag, I will hit the file handle limits. I have
increased the limits and it works fine now.

I have an AMD GPU and chrome/chromium just does not work. It will constantly
flicker.

------
nwmcsween
Well, here are my 'improved' settings for sysctl.conf:
[http://sprunge.us/dhgM](http://sprunge.us/dhgM). Most of the TCP stuff is to
guard against server resource exhaustion by SYN floods, etc.; the vm settings
are optimized for a hot cache (vs. a cold cache of programs) and spinning
media (page cluster).

------
jccooper
To apply the sysctl changes right away:

    
    
      sudo sysctl -p /etc/sysctl.conf

My oldish kernel doesn't recognize the PID settings, which is unfortunate.

~~~
AlTobey
Wow, how old is that kernel? I've been using that since at least the 2.6.32
era.

~~~
jccooper
I think I was on 2.6.38. Seems like it ought to work, but didn't. Dunno.

------
PointerReaper
OMG! I can improve the audio 200% by simply setting: pactl set-sink-volume
alsa_output.pci-0000_00_1b.0.analog-stereo 200%

Teh sound is so much more sound-ier! Way much more cranked up than the lame
defaults! [http://goo.gl/TJLTMF](http://goo.gl/TJLTMF)

