
Linus Torvalds: “Do No Harm” - ekianjo
https://lkml.org/lkml/2017/11/21/356
======
zx2c4
I wrote the email that prompted this quite civil response. I'm very pleased
with the outcome, because I think this clear statement of his position is a
lot more useful for people to work with, rather than just assuming Linus hates
security or something.

I interpreted his response in practical terms as essentially being the
following. Patch set merge 1 has "report" as default and "kill" as a non-
default option. Patch set merge 2 has "kill" as default and "report" as a non-
default option. Patch set merge 3 removes support for "report". This way we
have the best of both worlds: we eventually reach the thing that actually adds
real security benefit, which makes security folks happy. And we don't break
everybody's computers immediately, allowing time for the more obvious bugs to
surface via "report", which makes users and developers happy. Seems like a
reasonable process to me.
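To illustrate what the two modes mean in practice, here's a rough userspace
sketch of a saturating refcount check with a "report"/"kill" switch. The names
and the API are invented for illustration; the real kernel code is different:

```c
#include <assert.h>
#include <limits.h>

/* Illustrative modes mirroring the proposed merge phases: "report"
 * warns and keeps going, "kill" tells the caller to terminate the
 * offending task. These names are made up, not kernel symbols. */
enum refcount_mode { MODE_REPORT, MODE_KILL };

/* Saturating increment: on overflow the counter is pinned at INT_MAX,
 * so a wrapped refcount can never reach zero and free an object that
 * is still in use. Returns 0 when the caller should kill the task. */
static int refcount_inc_checked(int *count, enum refcount_mode mode,
                                int *violation)
{
    if (*count == INT_MAX) {
        *violation = 1;              /* always report the overflow */
        return mode == MODE_REPORT;  /* in kill mode, the caller acts */
    }
    (*count)++;
    return 1;
}
```

The point of the staged rollout is visible here: flipping the default from
"report" to "kill" changes only the policy, not the detection logic, so the
bugs surfaced by the warnings in phase one are exactly the ones that would
have killed processes in phase two.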

~~~
zaarn
Though I think that the time between PSM1 and PSM2 will be significant.
Usually default options are changed once basically all distros compile with
the other option without widespread breakage. And once no LTS kernel with PSM1
is supported, you merge PSM3.

Might take years, but at least the airplanes keep flying instead of crashing
their computers and, consequently, themselves.

~~~
zx2c4
> Though, I think that the time between PSM1 and PSM2 will be significant.

Indeed you're probably right there. Fortunately security-focused distributions
and individuals would be able to change the defaults in the interim.

------
lima
Background: the Kernel Self Protection Project (KSPP) recently upstreamed
the Grsecurity/PaX reference counting implementation, which prevents a certain
class of security bugs from being exploited.

Grsecurity is a security hardening patchset for Linux that makes _deliberate_
trade-offs in favor of security, sacrificing availability if necessary. This,
aside from the political issue, is the main reason why it's hard to upstream.
Linus has called some of their mitigations "insane" before precisely for
that reason. Grsecurity will rather terminate userland programs or, in some
rare cases, panic the kernel if it finds itself in an undefined state. This is
exactly what you want if you care about security, but it's not a trade-off
everyone is happy with (including Linus).

Unfortunately, Grsecurity/PaX is not (and probably won't ever be) involved in
the KSPP project, and the KSPP developers do not understand the code nearly as
well as the Grsecurity team does. This led to a situation where the new code
caused a crash that they weren't able to fix in time, so they disabled the
feature at the last minute.

I used Grsecurity for years, until they stopped making it publicly
available, and I remember many bugs that were uncovered by PAX_REFCOUNT and
that, yes, occasionally panicked the kernel where a vanilla kernel would run
just fine. They usually found and fixed those within hours.

Grsecurity/PaX have invented many of the modern exploit mitigations; in that
field they are probably second to none. Some have even been implemented in
hardware. Their expertise in building modern defenses is astonishing (their
latest invention, the control-flow integrity mechanism RAP, is a work of art).

Linux could be the most secure kernel; instead, it's fallen way behind Windows
- which has much better defenses than Linux nowadays, thanks to Microsoft's
ongoing battle with rootkit writers. Go figure.

If the large companies who use Linux _really_ want to improve kernel security,
they need to work with Grsecurity and not against them. It's beyond me how
this isn't happening already.

~~~
amelius
> Grsecurity will rather terminate userland programs or, in some rare cases,
> panic the kernel if it finds itself in an undefined state. This is exactly
> what you want if you care about security, but it's not a trade-off everyone
> is happy with (including Linus).

I'd also like my kernel to halt whenever an assertion does not hold, for the
sake of keeping my sanity; not just for security.

Why would you _not_ want this?

~~~
drinchev
Linux is used in so many critical systems.

What happens when a security bug stops the ventilator of a person lying in a
hospital bed, or halts the screen of a surgeon?

Not to mention voting machines, ISPs, telecoms.

To me, having all of those stop when properly exploited looks like a very
scary DoS attack vector.

Imagine a security f*ck up, like Heartbleed, but this time with an option to
halt kernels / systems.

~~~
exikyut
Hooold it. Some of those things are not like the others.

\--

I pity the engineers working on ventilators and the like. Medical devices are
insanely hard to get right; that's neck and neck with aviation testing. I'm
reminded of SQLite3's "aviation-grade" TH3 test suite, which apparently has
100% code coverage. Let's be honest; Linux's monolithic design can't really
attain that.

I would never use Linux for a medical device. I say this as someone who just
happens to only be running Linux on every machine in the house right now (and
I have for years, it's just how things have worked out, it's not at all novel
or whatever, my point is that I'm totally comfortable with it). I'd use L4 or
something instead. In a pinch I'd use a commercial kernel with tons of
testing. Maybe I'd even use Minix; I'm quite sure a lot of people in industry
are seriously looking at it now that Intel has pretty much unofficially
greenlit it as a good kernel (lmao).

\--

Voting machines, on the other hand; I'd totally use Linux for that, because
the security/usage model is worlds apart. Here, I WOULD ABSOLUTELY LIKE FOR
THE TINIEST GLITCH TO CRASH THE MACHINE, because that glitch could be malware
trying to get in.

The user experience of a voting machine is such that you walk up to it,
identify yourself, and push a button. Worst case scenario in this situation is
that you do some involved process to ID yourself and then the unit locks up,
so you have to redo the ID effort on another unit. That is, for all use cases,
not going to be a problem.

(I think that's the first time I've used all caps in years!)

\--

Telecom systems... those are also a totally different world. See also: Erlang.
In this situation you would likely want a vulnerability to literally sound a
klaxon on a wall, but have the system still keep going.

I'm reminded here of an incident where a country's national 3G system was
compromised (not the US, somewhere else) by hackers and the firmware of the
backend systems was hot-patched (think replacing running binary code - the OS
allowed it, it was REALLY hard to even notice this was happening) to
exfiltrate SMS messages and cause calls to certain numbers to generate a
shadow call (which ignored mic input) to an attacker-controlled number as
well.

Telecoms is a classic case of massive scale; nowadays a single telecom switch
might be routing thousands of calls through at a time. Yeah you don't want
even a single machine to go down. But you DO want VERY thorough debugging,
auditing and metrics.

(Which apparently don't exist.)

\--

As for a Heartbleed-esque catastrophe, apparently one is going to be announced
for Intel ME at the upcoming Blackhat(?) conference in December. I can't wait
to hear about it myself.

~~~
hpaavola
Many medical devices run Linux. Most (AFAIK) patient monitors run Linux; GE
and Philips (the biggest in the business) both run on Linux. Those are the
devices that keep you alive during surgery, make sure that babies born
prematurely are doing OK, monitor your state while you are in an ambulance,
etc.

~~~
TickleSteve
No...

Many medical devices run Linux as a _User-Interface_... (or Windows for that
matter).

The actual safety-critical portion of these systems is rarely running Linux,
but rather on a bare-metal micro.

~~~
exikyut
That makes a lot of sense.

I'm reminded of a UAV doing the same thing. It ran L4 for low-level control,
realtime scheduling, and security, and then virtualized Linux on top of that.

Sounds unbelievably clunky on the surface, then you realize it's a remarkably
useful way to abstract everything cleanly.

------
sgentle
It's interesting to see this laser focus on a particular kind of user. If
you're running Linux on a server, you're a user, but unless you're very
irresponsible you would probably rather your programs crash than give away
private information. Your interface is to a cluster of machines where
individual crashes are probably not that big a deal.

If you're running Linux via Android, you're a user, but mostly you're a user
of actively developed apps on top of an actively developed OS, usually pegged
to specific kernel versions. Your interface is to that layer on top, and given
that its code is written by app developers and hardware vendors who will ship
anything that doesn't crash, you probably want security bugs to crash.

It seems to me that the kind of user Linus means when he talks about "the new
kernel didn't work for me" is a user of Linux without any substantial layers
on top, where kernel updates happen more often than userland software updates,
and where individual crashes have a significant impact. In other words, users
of desktop Linux.

But I wonder if that focus on desktop Linux really reflects the majority of
users. And, if not, perhaps it might make sense to have "hardening the Linux
kernel" as the first step if it makes "raise the standard for the layers built
on top" the endpoint.

~~~
mrsernine
>you would probably rather your programs crash than give away private
information

Crashing on a security issue is a good thing for every kind of user. Crashing
on a latent bug that _COULD_ be exploited (but maybe can't be at all) is a
totally undesirable situation. The problem here is that hardening methods
lack the ability to make that distinction.

~~~
titzer
> Crashing on a latent bug that COULD be exploited (but maybe can't be at all)
> is a totally undesirable situation.

How do you square this with the reality that "keep on truckin" is generally
the path from bugs to security exploits, and has been shown to be over and
over in the wild?

~~~
mrsernine
Bugs will happen; that's a natural law of computer science. If you keep on
trucking over them, you will be delivering buggy software that is likely to
cause problems. Even if you chase them all down and correct them, your
software is still going to have bugs; that's a fact of life.

Should code containing bugs be allowed to run? If the answer is no we must ask
ourselves how much software we have today that is completely bug free (that
will be 0%).

I still think these proactive approaches are good for disclosing possible
exploits, but killing processes just because they might be exploitable is a
very long shot.

------
Santosh83
Very pragmatic. He sees software in the overall context of getting a job done
with a computer, imperfect though it may be, instead of dying because it was
not perfect.

Unlike a segfault from a user space program that indeed merits a 'kill', the
kernel should strive at all costs to keep running, since kernel panics are so
much more inconvenient.

~~~
theWatcher37
Absolutely not.

If “do no harm” is a principle, then the kernel should ensure that no harm is
taking place.

If flaws within the kernel allow _harm_ to occur while otherwise normal
transactions are occurring then it is absolutely preferable to panic and shut
down over allowing that potential harm to occur.

To suggest otherwise, that detected errors which allow harm should be let
through, is pure insanity.

Linus is unquestionably wrong in this regard.

~~~
anameaname
A thought experiment that comes up in kernel design classes: what should
happen if the OS were running the flight-control software for an airplane you
are on? If there were a bug in the kernel, perhaps a double free or a memory
leak, what should happen?

A panic would result in the airplane falling to certain doom. But if it were
to keep running, it may be a security vulnerability. Being absolutist in
either direction of the discussion will lead to absurd scenarios where you
would make the wrong decision.

~~~
Nokinside
> double free or a memory leak, what should happen

Both offensive and defensive programming are important in safety-critical
programs, and I get your point, but the things you mention don't happen in
safety-critical systems.

There is no dynamic memory allocation. The RTOS used will support "brick wall
partitioning" for memory, processing and other resources. Different systems
can run on the same OS, but they can't compete for processing time, locks or
memory access. Everyone has been dealt the resources they can have from the
start. It's not possible to run out of file descriptors or memory if you
allocate them statically from the start.

Assertion errors or monitoring errors in safety-critical systems usually
cause a reset or a switch to a backup system. If the program state is large
and a reset is not safe, retreating to some earlier state (constant backups)
is likely.
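A toy sketch of that discipline, with names and sizes invented for the
example: every resource is dealt out at start-up from a pool fixed at build
time, so exhaustion is decided at design time rather than at runtime:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_TASKS 8  /* resource budget fixed at design time */

struct task_slot {
    int in_use;
    unsigned char stack[256];  /* fixed per-task buffer, never freed */
};

/* The whole pool is allocated statically at link time; there is no
 * heap, hence no out-of-memory condition and no double free. */
static struct task_slot task_pool[MAX_TASKS];

/* Claim a slot from the fixed pool; returns NULL only when the budget
 * chosen at design time is exhausted - there is no allocator to fail. */
static struct task_slot *task_slot_claim(void)
{
    for (size_t i = 0; i < MAX_TASKS; i++) {
        if (!task_pool[i].in_use) {
            task_pool[i].in_use = 1;
            return &task_pool[i];
        }
    }
    return NULL;
}
```

The NULL case here is not a runtime surprise but a design-review question:
if it can ever trigger, the budget was set wrong.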

~~~
Annatar
_but those things you mention don't happen in safety critical systems._

Errors in logic happen everywhere.

~~~
Nokinside
_those_

dynamic memory allocation errors don't happen when there is no dynamic memory
allocation.

------
carlsborg
"without users, your program is pointless, and all the development work you've
done over decades is pointless.

.. and (then) security is pointless too, in the end."

He tends to get really mad when kernel devs inconvenience user-space devs.
Perhaps one of the reasons Linux succeeded is this fanatical customer focus -
if Linux is the platform, user-space developers are the customers.

~~~
babarock
This.

Not breaking user space is fundamental to the success of most operating
systems today. I once read an article, which I cannot seem to find, about
Microsoft employees spending months replicating "wrong" behavior in
<oldwindows.. win95?> so that applications would still run on
<newerwindows... win98? win2000?>. Users don't want to see their applications
crashing.

I wish I could find this article again, it's very relevant to your comment.

~~~
aurelianito
It reminds me of the hack Microsoft did to make simcity work in Windows 95.
[https://news.ycombinator.com/item?id=2281932](https://news.ycombinator.com/item?id=2281932)

------
kyberias
I think the earlier message drives the points home in more familiar Linus
style:

[https://lkml.org/lkml/2017/11/17/767](https://lkml.org/lkml/2017/11/17/767)

"Some security people have scoffed at me when I say that security problems are
primarily "just bugs". Those security people are f*cking morons."

Gotta love the guy. :)

~~~
smnrchrds
The person at the other end of the conversation would disagree with this
sentiment:

"Thanks. Still, I'd prefer Linus yell at me than other folks trying to do
similar work. If I can shield anyone from this abuse, then maybe they won't
give up on kernel security development. Digging Linus's actionable feedback
out of the ad-hominem attack can be challenging." [1]

[1]
[https://twitter.com/kees_cook/status/932694978366619648](https://twitter.com/kees_cook/status/932694978366619648)

~~~
kadenshep
>Digging Linus's actionable feedback out of the ad-hominem attack can be
challenging.

They're not really that separate. He's being totally disingenuous and still
letting his own fragile ego get involved.

~~~
kyberias
> He's being totally disingenuous and still letting his own fragile ego get
> involved.

Who is?

~~~
kadenshep
The person complaining.

------
alanfranzoni
That's a consequence of an "old" issue in the IT security field - security
researchers and developers sit at opposite sides of the table, they've got
different concerns and agendas.

Pick some security researchers; now tell them to build any nontrivial piece of
software; I doubt they'd be able to do it, and if they succeed their software
will be full of bugs, including security ones.

Security is part of the correctness and of proper building of the software, so
it should be integrated into software development. Security experts can (and
should) still exist, but the current state, where the infosec people appear to
rule, is pointless - exactly because the same infosec people wouldn't be able
to deliver better software than current developers.

I highly regard somebody _who can write software without security bugs_; I
don't regard as highly somebody who shows me the bugs but would be unable to
write that software at all.

Let's turn the infosec objective around: not to uncover security bugs, but to
write software without security bugs. Then we're on the same side of the
table.

~~~
lima
That's not a fair (or useful) assessment. Obviously, the narrow-minded
security people you describe exist, but they're a minority. Many security
people are developers who specialized in security, and are very much capable
of building software.

The kernel code in question is exactly what you ask for - instead of finding
and fixing single bugs, it's a mitigation that prevents _all_ occurrences of a
particular class of bugs.

~~~
watwut
When I was looking to learn how to systematically make secure software, I did
not find all that much great actionable information. There is a lot about
particular hacks and vulnerabilities, lists of popular vulnerability
categories, etc. There was one book dealing with architecture and such that I
found. Development quite clearly is not the focus of security research.

A lot of advice, especially that found on blogs, was downright naive and felt
like something written by someone who has never even seen a larger team
working.

~~~
extrapickles
As an infosec guy who was a software developer, I can say it's non-trivial to
write actionable general security advice.

There is an entire academic field of study on making network-related security
blunders hard (lang-sec). It generally boils down to: do all your parsing in
one spot, and a small set of features are evil.

What is really needed is a site where one can pick a bunch of features that
your software project has/wants, and it then gives semi-tailored advice on
what to do, what to watch out for, or that you need to rethink things (e.g.
rolling your own TLS implementation = world of hurt).
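As a concrete (if toy) illustration of the "all parsing in one spot" rule,
here is a single recognizer for an invented type-length-payload wire format.
Downstream handlers can trust the struct precisely because every bounds check
lives in this one function:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Invented wire format for illustration:
 *   byte 0: message type
 *   byte 1: payload length
 *   bytes 2..: payload */
struct msg {
    uint8_t type;
    uint8_t len;
    const uint8_t *payload;
};

/* Full recognition before any processing: returns 0 on success, -1 on
 * any malformed input. No other code in the program parses raw bytes. */
static int parse_msg(const uint8_t *buf, size_t buflen, struct msg *out)
{
    if (buflen < 2)
        return -1;               /* header must be complete */
    if (buf[1] > buflen - 2)
        return -1;               /* claimed length must fit the buffer */
    out->type = buf[0];
    out->len = buf[1];
    out->payload = buf + 2;
    return 0;
}
```

The lang-sec argument is that scattering these checks across handlers is
exactly how length-confusion bugs like Heartbleed happen; one recognizer
makes the checks auditable in one place.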

------
zaarn
>It's that the code has been RUN BY USERS for months. If it's been [...] in
grsecurity for five years [...] It only means that hardly anybody actually
ever ran it.

Subtle burn towards Grsec, I laughed a bit.

In all seriousness, I think Linus is somewhat on the right track. Security
patches should, first and foremost, not break anyone's workflow (except maybe
the evil haxor's workflow) and should rather print a warning until the exact
implications of a full Terminator-mode patch are understood. People won't
upgrade to kernels that break their workflow, and a warning in the kernel log
is better than a vulnerable kernel.

------
quink
I'm reminded of a recent security fix for IE11 that Microsoft pushed out
earlier this year:

[https://developer.microsoft.com/en-us/microsoft-
edge/platfor...](https://developer.microsoft.com/en-us/microsoft-
edge/platform/issues/12349663/)

It killed printing from iframes completely. Great that they solved the
security problem, whatever it may have been. They also broke a major piece of
browser functionality that a lot of enterprises rely on fundamentally. Hell,
even printing shipping labels from eBay was broken, and, hilariously, so was
Microsoft Dynamics 365. And our own product.

And the first response by Microsoft on this very link? "Won't Fix", keeping a
major bit of fundamental browser functionality - printing a document -
broken. "Do No Harm".

~~~
pjmlp
Because there is a workaround, applications just need to be updated.

"Either use Print Preview instead of print button or if currently using
window.print() in javascript change this to document.execCommand('print',
false, null)"

~~~
AnIdiotOnTheNet
> Because there is a workaround, applications just need to be updated.

What actually happened: people reverted the patch. In the real world, you
can't expect timely or even correct response from vendors you rely on. It
sucks, but it's how it is.

~~~
torpcoms
Then after you get compromised in some way, you will start to look for better
vendors.

~~~
AnIdiotOnTheNet
Only if the cost*risk of a future compromise is > cost of replacing the
product. This is rarely going to be the case for business critical software.

------
throw2016
Some security folks tend to have tunnel vision. They are focused on their own
little patch, and the first instinct is to control and shut down with zero
concern for usability. Job done. Bye. But that's too self-serving and lazy.

And if that's what they want fine, release your own app or distro and let
people who need/desire that level of security choose security over everything
else without trade-offs.

The worst thing is a kind of entryism: trying to impose yourself on people
operating under different constraints. That's why you always need a strong
manager to balance interests, and this is the role Linus is playing. More
often than not, with this kind of imposition something comes out of the blue,
and you are left squandering hours to fix things and get back up and running.

------
jbangert
A few points:

1) Failing loudly is better than failing silently. A memory corruption issue
(or a bad refcount, etc.) is not a benign issue that only becomes relevant
under carefully crafted exploit conditions. You need the carefully crafted
exploit to get the system back into an attacker-controlled state (i.e. code
execution); by itself (with non-malicious inputs, usually something random or
slightly atypical - enough not to have been noticed yet, but typical enough
that some program does it) the system is likely to either panic immediately
(the same result as with PaX) or to corrupt some memory, in which case you
will have a lot of strange behaviour to track down later. (Users will
probably blame it on hardware or on their user space, so you might never see
it. For example, a recent OSDI paper showed that ext3/4 had several
real-world data corruption bugs. If these aren’t as frequent as the recent
bcache issues, no one notices.)

2) When I was doing research projects (into memory defenses in the kernel)
about 3 years ago, there was no (commonly used, that I saw) automated testing
infrastructure in the kernel. This makes regressions, especially in drivers
for rare hardware, hard to catch. While tests aren’t a panacea, I think Linus
overestimates what fraction of problems code reviews will catch.

3) The “don’t break user space” strategy is already failing. Every mainstream
distribution and embedded vendor stays on an old kernel branch. Big
deployments do staged rollouts and extensive burn-in tests. This isn’t just
because of the kernel, but because of extensive breaking changes everywhere
(compilers, standard libraries, etc. all need to change sometimes). The last
time this happened, IIRC, it was some audio bug in a strange configuration.
In my experience, running a non-standard Linux audio config causes countless
breakages, so an additional one in the kernel that might save my personal
data from being exfiltrated is worth it. Most users have average (and
therefore well-tested) setups, which means they won’t see breakages as often.

Perfect software doesn’t exist, and even MSFT backed off from maintaining
religious backwards compatibility. (Note that Microsoft’s approach was not to
flame at developers and hinder new development, but to extensively build
compatibility shims. Often, these came with trade-offs strongly in favour of
security, e.g. UAC.)

Breaking user space is ok; users already expect breakage, and the cost of the
additional breakages is low (to users and to society as a whole) compared to
the cost of security breaches [citation needed, but Linux kernel security is
relied on in a lot of places].

~~~
blaisio
So one of Linus' main points in this series of posts is that failing loudly is
actually not always better than failing silently or quietly, and it's really
annoying when people come in making that assumption without thinking. This is
also something that he is constantly repeating and ranting about, and it's
arguably one of the reasons why Linux is so successful.

Think about a smartphone - do most users want it to crash and reboot, even if
some error (which could end up being a security issue) occurred? The answer is
no, absolutely not. The crashing and rebooting itself isn't really that
helpful. Reporting the bug to the Linux developers _would_ be helpful.

Some people do want the frequent crashing behavior and that's okay, but it's
not okay to make that decision for everyone.

Also, users might expect minor breakage if someone somewhere makes a mistake,
but that doesn't mean it's okay. That's like saying if someone always washes
their hands before eating, it's okay if they get sick, because they were
expecting that they might get sick.

~~~
pjmlp
Interesting that you mention smartphones, because that is exactly what Google
has done to their Linux fork.

Every Android app that misbehaves just gets killed without warning.

The scenarios where this can happen have been increasing since Android 7.

------
chris_wot
So tl;dr - fixing a security bug is rarely the end of the story, fixing the
root cause is far more important. And don’t piss off the users.

~~~
_nalply
If your security patch kills users' buggy processes or even crashes their
systems, then you are a «bad security person». Please report the bad access
first, so users and the developers of their software have time to fix the
bug. Upgrades that disable users' software are a big no-no. After all,
security is meaningless on a non-working system.

~~~
theWatcher37
I’m so glad that backwards thinking concepts like this are dominant, otherwise
we might actually have secure software!

Think about it, an open-source OS is choosing backwards compatibility over
security. This would have caused quite the stir in the 90’s Linux community.

~~~
_nalply
Security is meaningless if your system doesn't work. Makes sense to me.
Otherwise your car would be forbidden from starting after an upgrade because
you might run over someone tonight. It's a trade-off.

------
unixhero
This is the most insightful Linus write-up yet. His others are good too, but
this one just hits the spot. Great!

Funny note: this post could have been textbook material. At the end he even
says please. The only thing that breaks it is the reference to touching
oneself :)

------
SAI_Peregrinus
No security person is likely to argue that DoS attacks aren't a security
issue. They get CVEs all the time! A patch that denies the user the ability
to use the service introduces a security issue. Bad usability IS a DoS
attack.

------
kseifried
With respect to
[https://lkml.org/lkml/2017/11/21/356](https://lkml.org/lkml/2017/11/21/356)
and this thread I just want to get #CVEs on all the things so we can figure
out the scope of the problem, and then look at what we need to do to "fix"
"it" (assuming "it" is a real problem, dunno without data).

------
margorczynski
Quick question: doesn't the Linux driver model (drivers run in kernel space)
create a giant attack vector because of that?

~~~
mschuster91
Lots of drivers on Windows, OS X and Linux run in kernel space simply because
kernel-to-user-and-back context switches are expensive and so kill
performance.

I believe the exceptions are printer and scanner drivers (these run in user-
space CUPS in OS X/Linux), some filesystem drivers (basically, FUSE-backed)
and cheap-ish USB drivers.

~~~
margorczynski
The logic behind why it is done like that, I get. Just wondering, as you
said, whether it's possible to push at least the most bug-prone and
exploitable ones to user space.

~~~
Santosh83
I don't see how you can convert a kernel-space driver to a user-space one
without significant rewriting, and in some cases it may not be possible at
all.

~~~
margorczynski
What about some abstraction/interfacing layer/driver that would take care of
exposing some kernel functionality an average driver needs and provide
additional validation?

~~~
zaarn
Drivers need to do things that are inherently unsafe.

The driver responsible for your hard drive needs to instruct the SATA
controller to copy a piece of data from disk to a specified memory location.

The kernel has no understanding of the process without the driver and is
therefore incapable of preventing abuse.

You can _somewhat_ prevent this using various methods, but those cost
performance.

And keep in mind that the abstraction itself already costs performance, and
it doesn't even allow easy extension of the abstraction if necessary.

While microkernels that do run everything in userspace are nice in theory,
you usually pay in CPU cycles compared to kernel-mode drivers.

------
kauegimenes
Body for this message unavailable

Cache:
[https://web.archive.org/web/20171122005241/https://lkml.org/...](https://web.archive.org/web/20171122005241/https://lkml.org/lkml/2017/11/21/356)

------
torpcoms
Is there no way to make this an argument at boot time?

Something like, `ksecerr=log+kill`

------
dang
Related to this from a couple days ago:
[https://news.ycombinator.com/item?id=15738392](https://news.ycombinator.com/item?id=15738392)

------
Shorel
I wish Wayland developers would share Linus' mentality.

Right now it restarts once every 36 hours on average, killing all my windows
and programs.

X11 didn't behave this badly.

------
luckydude
Wow, Linus still has it. That was an _excellent_ email. Go Linus, that's some
full on management chops on display.

------
Annatar
_Because the primary focus should be "debugging". The primary focus should be
"let's make sure the kernel released in a year is better than the one released
today"._

Is he starting to sound like illumos engineers or what? Better late than
never, but it took him long enough.

------
KasianFranks
A dumpster is not a bug, it's a social issue -
[http://www.mit.edu/hacker/hacker.html](http://www.mit.edu/hacker/hacker.html)

------
mtgx
Linus: Do no harm! Except against security people. I'd like to harm them _a
lot_!

------
lifeisstillgood
Do you want to know why your (random big company) does not do software
development that well? When was the last time you saw an email like that from
the chairman of the board to all employees? With "please" in it, and
long-winded explanations?

Companies need to change.

~~~
corpMaverick
This is what I was thinking while reading. Linus has been thinking carefully
about this for a long time. He has taken this piece of software and nurtured
it for years. He has protected it. He has made sure that it is coherent and
maintainable. He has put forward a set of guiding principles.

I haven't seen this in the companies I have worked for; middle and upper
management don't have any insight into the products they are building. They
just care about dates and getting projects done. There are no insights. There
is no long-term thinking. There is no awareness of the technical debt that is
building up. Their only solution is to throw more money (and people) at the
projects.

~~~
ksk
I 100% agree with Linus here, buuuut Linus doesn't have to make money for his
company and isn't personally responsible for the salaries of people in the
company. He doesn't have to sit in meetings and explain why sales are down
10% this quarter, or keep assuring investors that their money is safe. Those
upper-management people do solve problems; it's just that they're solving
problems in an environment that isn't based on logic and is fundamentally
unfair.

------
fb03
The first thing I did when starting to read Linus' answer was to Ctrl-F for
bad words. I am really happy he's grown out of that angsty, colorful
name-calling era.

Way to go, man!

