
Linux 3.7 released - moonboots
http://kernelnewbies.org/Linux_3.7
======
aartur
Optimization of file deletion in ext4 gives nice results (from a commit
message):

> X86 before (linux 3.6-rc4):
>
> # time rm -f test1
> real 0m2.710s
> user 0m0.000s
> sys 0m1.530s
>
> X86 after:
>
> # time rm -f test1
> real 0m0.644s
> user 0m0.003s
> sys 0m0.060s

The commit changes only five lines.

EDIT. Not sure if this optimization applies to filesystems mounted with
standard journaling options...

~~~
unwind
Sounds nice, good find!

I dug up the commit:
[http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git...](http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commit;h=18888cf0883c286f238d44ee565530fe82752f06)
which seems to be a bit older than I would have expected. Of course, this
being Linux, "old" means it's from September 19th.

~~~
caf
The merge window for 3.7 opened on September 30th, so it's not _that_ old.

------
autotravis
> '"Fast Open" is an optimization of the process of establishing a TCP
> connection that allows the elimination of one round trip from certain kinds
> of TCP conversations. Fast Open could result in speed improvements of
> between 4% and 41% in the page load times on popular web sites.'

Interesting... anyone know more about this?

~~~
ck2
<http://en.wikipedia.org/wiki/TCP_Fast_Open>
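
From the client side it ends up being one flag, roughly like this (a sketch, assuming a Python build that exposes socket.MSG_FASTOPEN and a kernel with the client side of the tcp_fastopen sysctl enabled; host, port, and payload are placeholders):

```python
import socket

# Sketch of a TFO client: sendto() with MSG_FASTOPEN performs the
# connect itself and, once a cookie has been cached from an earlier
# connection, carries the payload in the SYN.
def tfo_request(host, port, payload):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # Implicit connect; with a cached cookie the data rides the SYN.
        s.sendto(payload, socket.MSG_FASTOPEN, (host, port))
        return s.recv(4096)
    finally:
        s.close()
```

On the first contact there is no cookie yet, so the kernel quietly falls back to a normal three-way handshake; the saved round trip only shows up on later connections.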

I guess CentOS will get this in 2020, sigh, still waiting for initrwnd 10
support.

~~~
sliverstorm
Ah, the downsides to stability.

The particularly unfortunate part is, as much of the world's commercial
hosting runs on RHEL, it's the world that doesn't get this until 2020. Well,
except for services that roll custom kernels.

~~~
ck2
Yup - I realize it's the trade-off for stability.

Every so often I am tempted to fiddle with Debian; it looks like more and more
people are thinking that way:

[http://w3techs.com/blog/entry/debian_is_now_the_most_popular...](http://w3techs.com/blog/entry/debian_is_now_the_most_popular_linux_distribution_on_web_servers)

~~~
Scramblejams
I highly recommend making the switch. They've been doing the package
management thing the longest, and it shows, both in package selection and how
smoothly it all generally runs. And I've never had less trouble upgrading from
one version to the next than on Debian.

~~~
sliverstorm
On the other hand, one of the nice things about RHEL is you hardly ever have
to upgrade major version... RHEL4 has been around since 2005, and is only now
finally being phased out.

In comparison, I think "sarge" was the stable Debian in 2005. "sarge" is no
longer oldstable, and neither is etch; oldstable is "lenny", released in 2009.

Don't get me wrong, I like Debian, but when you compare "ease of upgrading",
don't forget to consider "need to upgrade"!

~~~
wladimir
True, but don't forget that the longer you wait with upgrading, the more
compatibility issues you may run into when you finally _have_ to upgrade.
Linux has changed a lot since 2005.

~~~
Legion
Agreed with this!

I'm more devops than sysadmin, so more expertise may have changed my
experience, but when I did run a CentOS server, I thought, "yay, this is
great"... until it came time to actually update.

Then I realized that I had simply been accumulating all of my upgrade pain
as debt, with compound interest.

By comparison, upgrading Debian servers from version to version felt like
hopping from bed to bed in a mattress store. Aside from a missed footing or
two, it was usually a nice soft landing.

~~~
Scramblejams
Yes! Plus I get an unreasonable amount of satisfaction out of that server I
last imaged in 1998 running the latest version of Debian. So awesome -- it
might outlive me, and if it does, it'll be up to date!

------
jey

      perf trace will show the events associated with the
      target, initially syscalls, but other system events like 
      pagefaults, task lifetime events, scheduling events, etc.
    

I'm excited.

~~~
tocomment
Can you explain this a bit more? It sounds interesting.

~~~
VMG
This allows developers to see what their programs are doing in more detail.
They can then use that information to optimize their code.

~~~
tocomment
There's nothing similar in older linuxes?

~~~
deweerdt
There is, strace does just that. perf trace is still evolving though and
appears promising. See, for example, scripting support:
<http://lwn.net/Articles/371448/>

------
darkstalker
I think this is important: "JFS: TRIM support". This just added another choice
of filesystem for SSD drives.

------
rgbrgb
How long does this stuff usually take to get into a more commercial release?
When will we see it in Ubuntu? Android?

~~~
vladev
Arch Linux will probably get it in a couple of weeks.

Android, on the other hand, will take much longer. It's sad as there are so
many goodies in this for ARM, especially the multi-platform support. This will
make updating Android versions a lot easier (for manufacturers and hackers).
Right now one of the bigger issues is updating the kernel (and the closed
drivers, to be honest).

~~~
thevdude
:( I just upgraded my arch a few days ago to 3.6.9, and now I'll have to do it
again.

~~~
Adaptive
You can always blacklist a package (such as linux) in /etc/pacman.conf to
avoid upgrading it during pacman -Syu, for example. I had to do this during a
power regression in the kernel. However, this isn't a long-term strategy on
Arch. Note also that Arch has LTS kernels, should you prefer.

~~~
allerratio
Not updating a package is the best way to prevent security flaws from getting fixed.

~~~
jvm
If not upgrading the kernel every week or two is a security hole, most people
are pretty screwed. I personally do not like to restart that often. Turning
off automatic updates just gives you control of when to upgrade.

------
jcastro
Has anyone used this or a prerelease with the btrfs fsync improvements? I'm
interested to see if dpkg performance is usable now.

------
elux
Kernel newbies is down:
<http://www.isup.me/http://kernelnewbies.org/Linux_3.7>

Cached version: [http://webcache.googleusercontent.com/search?q=cache:vjkg-
vG...](http://webcache.googleusercontent.com/search?q=cache:vjkg-
vG3p8QJ:kernelnewbies.org/Linux_3.7+&cd=1&hl=en&ct=clnk)

------
jug6ernaut
ARM 64-bit, wow. Is this really necessary? Or just a prelude to some new
applications?

When I think of ARM, I think of mobile phones, where total RAM isn't an issue
(yet?). Are there other advantages?

~~~
binarycrusader
Better performance, and yes, it won't be long now before mobile devices ship
with more than 4GB of memory.

Don't forget AMD's announcement about ARM64 based servers.

~~~
mbell
ARM64 isn't required for > 4GB of physical memory. The Cortex-A15 can already
address up to 1TB and is already shipping. Only the per-process limit is
locked to 4GB on a 32-bit core.

~~~
binarycrusader
I think you're splitting hairs. In fairness, I didn't explicitly call that
out, but as soon as you have more than 4GB of memory, there's going to be an
application that wants to use more than 4GB. So yes, 64-bit is necessary in my
opinion. Especially when you consider the server market.

~~~
mbell
> but as soon as you have more than 4GB of memory, there's going to be an
> application that wants to use more than 4GB

Maybe, but my desktop has 32GB of RAM and a 64-bit CPU, and I've rarely seen a
single process use 4GB of memory unless it was leaking. My laptop has 8GB of
RAM, also 64-bit, and I'm pretty sure I've never had a single process use 4GB.
The only exception I can think of was doing some silly data manipulation,
e.g. the initial build of a property graph of the entire Boston-area transit
system. I bet some high-end games or video/photo editing software would use
more than 4GB. The point is, those are all pretty niche situations. I'd bet
the majority of memory usage on an average person's computer comes from
browser processes, a few hundred MB each if that, and office applications,
also a few hundred MB. Those processes add up, though, so having more RAM is
usually a really good thing even if no single process comes close to 4GB.

I also challenge the need in server loads. I'd bet the vast majority of
applications never use 4GB either on the app server or the database server.
Most of what gets discussed here on HN are large, high-scalability
applications that you wouldn't host on ARM cores anyway. We often forget that
the vast majority of websites are tiny and low-traffic.

~~~
noselasd
Datacenters have a lot of stuff running in them. Big servers and databases
aren't just for public-facing, large-scale applications.

Database servers in particular want as much memory as possible, and it
doesn't take that big of a business support application to have a working set
of data > 4GB.

The same goes for a lot of the memory-hungry Java application servers that
sit around in many businesses.

~~~
mbell
Apparently we're crossing wires here. I'm in no way, shape, or form saying
these applications don't exist (I'm currently writing one).

What I am saying is that for every such application there are probably more
than a thousand sites hosted on free-with-your-domain or $5-a-month PHP +
MySQL hosting that use nowhere near 4GB per process. For the companies
hosting those sites, a Cortex-A15 with a ton of RAM is an awesome solution.
This isn't all or nothing; we're talking about two different markets.

~~~
sliverstorm
But how does "64bit is needed in specific applications that are not the
majority" translate to "64bit is not needed"?

~~~
mbell
It doesn't, and I don't see where I've made that claim. I'm refuting the
opposite. The initial claim was:

> I didn't explicitly call that out, but as soon as you have more than 4GB of
> memory, there's going to be an application that wants to use more than 4GB.
> So yes, 64-bit is necessary in my opinion. Especially when you consider the
> server market.

I'm just saying I don't agree. There is a massive market, very open to the
power savings offered by ARM, that has no use for > 4GB process spaces. That
doesn't mean there aren't markets where 64-bit will be useful.

~~~
sliverstorm
I think we are on different pages with the word "necessary". I am using it as
"needs to exist as an option", and I think you're using it as, "needs to be
used by everyone".

------
polarrat
Sorry for being such a noob. But could someone please tell me the difference
between "linux" and other linux builds like Ubuntu, RedHat etc?

~~~
rkalla
If you've never been in the Linux world, I can certainly see how this is
confusing.

This announcement is for the Linux kernel. The different Linux distributions
(Ubuntu, Red Hat, etc.) all bundle different versions of the Linux kernel,
plus umpteen packages around it, to create a cohesive desktop or server
experience.

The kernel is the one core piece shared between ALL of the distros. It is the
desktops, package managers, etc. that differ between the distros.

------
sdafdasdfasdf
from "TCP Fast Open: expediting web services":
<http://lwn.net/Articles/508865/>

> Furthermore, the server should periodically change the encryption key used
> to generate the TFO cookies, so as to prevent attackers harvesting many
> cookies over time to use in a coordinated attack against the server.

What is going to do this? I hope this is built-in somehow.

~~~
marshray
It looks like the key will be accessible via the proc filesystem. But it's
anyone's guess how many distros will faithfully schedule a cron job to rotate
the key.

EDIT: Looks like the key is chosen at kernel module "late init" time. I think
this is before any init scripts have had the opportunity to add back any
entropy persisted from previous boots. So the entropy in the kernel pool is
minimal. It may be plausible for a remote attacker to guess the key for a
bunch of servers.

Also, if the key is not rotated by cron, it provides a single-packet method
for a remote attacker to observe that a server has been rebooted since he last
checked. This will give a good indication of how often security patches have
been applied.

[http://git.kernel.org/linus/1046716368979dee857a2b8a91c4a883...](http://git.kernel.org/linus/1046716368979dee857a2b8a91c4a8833f21b9cb)
[http://git.kernel.org/linus/168a8f58059a22feb9e9a2dcc1b8053d...](http://git.kernel.org/linus/168a8f58059a22feb9e9a2dcc1b8053dbbbc12ef)
[http://git.kernel.org/linus/8336886f786fdacbc19b719c1f7ea91e...](http://git.kernel.org/linus/8336886f786fdacbc19b719c1f7ea91eb70706d4)

~~~
aleyan
Might be better to have the keys rotated after a certain number of TFO cookies
are generated rather than on a time-based schedule. This will prevent
attackers from trying to make a huge number of requests in a set period of
time.

~~~
marshray
The TFO cookie is only generated once per client "source IP" and is good until
the key is changed on the server. (Scare quotes because the source IP may be
spoofed.)

For an attacker to learn a cookie that's valid for a given victim "source IP",
he only needs to be a passive observer somewhere along the route. Even if we
believe that's very hard in most cases, if it's possible at all, he has the
mother of all anonymous reflected DoS amplifiers.
[http://tools.ietf.org/html/draft-ietf-tcpm-
fastopen-02#secti...](http://tools.ietf.org/html/draft-ietf-tcpm-
fastopen-02#section-6.2)

So, yeah, using a key that's rotated after a short amount of time -or- number
of uses (whichever comes first) seems like a good idea.

~~~
santaragolabs
It's very easy to do in NAT'ed environments, and the Linux kernel doesn't
implement the RFC draft's suggestion to include timestamps too.

An attacker who doesn't want to do a MITM attack because that might be noticed
can set up sessions to all kinds of servers outside the NAT which support TFO.
Then all these TFO cookies are used in spoofed SYN packets with the source IP
being set to the host behind the NAT that the attacker wants to flood. Easy
enough.

~~~
marshray
Yep.

Of course, some will argue that if the attacker is inside your NAT, you're
already pwned.

I don't think that's a very good principle for the security design of internet
protocols.

