
“DBus is seriously screwed up” - deng
http://thread.gmane.org/gmane.linux.kernel/1930358/focus=1939166
======
outworlder
This whole pulling crap out of the Linux kernel mailing list is doing the
industry a disservice.

Quotes are taken out of context, used by people who lack the required
technical understanding, which then reach the ears of managers, who will
promptly act on overblown concerns. All of it somehow backed by Linus' word,
even when he doesn't actually mean it.

I've started a stopwatch to see how long it will take for some of my former
coworkers to contact me with concerns about DBus. We used it pretty heavily.

As for DBus itself: guys, it is useful. It works. You can count on it being
there and working in most distributions, and you can talk to most things
nowadays. It's easy to use (for most use cases), and language bindings are
everywhere.

So, it turns out it's slow. It's so cool that we have identified that. Let's
fix it.

Even if the slowness doesn't seem to bother many of its use cases.

~~~
npsimons
So totally this; it's what I've said time and again. Is Linus abrasive? Yes,
he can be. He usually has reason. But people seem to rarely read the whole
thread, and even if they did, they are usually outsiders who both lack the
technical chops to make valid criticisms and aren't familiar with Linus'
management style. I for one count myself as one of the former, as I've been
outside of kernel land for quite some time. I'll say this though: the N900,
with 256MB of RAM and a single core 600MHz ARM chip from 2009 had DBus, and it
didn't seem to suffer too much from it. Hell, it's kind of impressive to be
able to make phone calls from the command line, say over SSH:

[https://wiki.maemo.org/Phone_control#Make_a_phone_call](https://wiki.maemo.org/Phone_control#Make_a_phone_call)

Can improvements be made? Probably. Should they be? Why the hell not? Should
clickbait headlines stop distracting us from real work? I only wish they
would.

~~~
castell
Strigi desktop search and NEPOMUK semantic desktop extensively relied on
D-Bus. The D-Bus background daemon often hung and NEPOMUK itself was known to
be slow and wasteful with hardware resources. Maybe it wasn't D-Bus's fault at
all, but the years-long problems led to a bad impression.
[http://en.wikipedia.org/wiki/Strigi](http://en.wikipedia.org/wiki/Strigi) ,
[http://en.wikipedia.org/wiki/NEPOMUK_(framework)](http://en.wikipedia.org/wiki/NEPOMUK_\(framework\))

Comparing a PPC (WinCE), an iPhone 1 and an N900 with similar hardware specs,
there was no clear winner.

------
lmilcin
It blows my mind how such a simple operation as passing messages from process
to process can balloon to waste the measured half a million CPU cycles. People
manage to have full-blown HTTP servers service a request with less than that.

Heck, I have worked on an algorithmic trading platform that in the limit of
5us receives market data, dedups it (multiple multicast streams for
redundancy), uncompresses it (fricking zlib), parses it, analyzes it, sends to
multiple algorithms which decide if current market situation matches certain
rules, decides on and fills a market order, the order gets inspected by an independent
mechanism to stop the algorithm if it malfunctions, and only then it gets sent
to market, over TCP, which is another form of IPC.

All in the span of fricking 5us which is _40_ times less than the benchmark
suggests for this simple task. Granted, the algorithmic trading world goes to
great lengths to avoid overhead, including kernel overhead, any kind of task
switching, branch-prediction misses, etc. But still, come on, guys...
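
For scale, the factor of 40 works out like this (assuming a ~2.5 GHz clock,
which is my assumption; the comment doesn't name one):

```python
CYCLES_PER_US = 2500      # assumed ~2.5 GHz clock, i.e. 2500 cycles per µs
BENCH_CYCLES = 500_000    # "half a million CPU cycles" per message
TRADING_BUDGET_US = 5     # the 5 µs end-to-end trading budget

bench_us = BENCH_CYCLES / CYCLES_PER_US
print(bench_us)                       # 200.0 µs per message
print(bench_us / TRADING_BUDGET_US)   # 40.0 -- the "40 times" in the comment
```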

~~~
the_mitsuhiko
> It blows my mind how such a simple operation as passing messages from
> process to process can balloon to waste the measured half a million CPU
> cycles. People manage to have full-blown HTTP servers service a request with
> less than that.

You're comparing a lightweight thing with a security policy driven message
bus. Obviously the latter is going to be more expensive.

~~~
huhtenberg
Nothing obvious about it.

If you look at Linus's trace, it's all heap and mutex operations. That's
just sloppy internal design, full of concurrency bottlenecks and lots of in-
memory cloning. You certainly don't mean to imply that this is the only way
to implement "security policy driven message bus" software, do you?

~~~
the_mitsuhiko
> If you look at Linus's trace, it's all heap and mutex operations. That's
> just sloppy internal design, full of concurrency bottlenecks and lots of in-
> memory cloning.

Far from it. One of the core designs of kdbus (which dbus cannot do because
it's not in the kernel) is that you can seal the payload buffer from the
sender, so the receiver can use it safely and concurrently without having to
clone anything.

There is obviously a lot of general inefficiency in the userland libraries, but
not in the design itself.

~~~
vezzy-fnord
You're talking about memfd, I think? That has nothing to do with kdbus in
particular. It's an independent syscall that replaces many use cases for
splice/vmsplice, even though it was introduced as part of the kdbus project -
it's still a separate thing.

~~~
the_mitsuhiko
> You're talking about memfd, I think? That has nothing to do with kdbus in
> particular. It's an independent syscall that replaces many use cases for
> splice/vmsplice, even though it was introduced as part of the kdbus project
> - it's still a separate thing.

That was created for KDBUS, but it has more uses than being used for KDBUS
exclusively. It was still written for KDBUS.

~~~
vezzy-fnord
And so it was accepted and merged. That something good came out of kdbus is
excellent, but that's no justification to merge in the whole package. A ton of
proposed kernel additions end up like this - a few good ideas refined and
accepted, and the rest thrown out.

------
userbinator
I can make two observations and hypotheses from the profiling results, which
agree with Linus' conclusion of "bad user-level code":

- Memory allocation/deallocation is taking the most time.

- All the percentages for each function are very small.

The former is a characteristic of code which heavily abuses dynamic
allocation. It's surprising to see how many programmers are not aware of the
overhead it adds and would malloc()/free() frivolously when something simpler
would suffice. This is also often accompanied by copious amounts of
unnecessary copying of data. I've worked with small embedded systems
where every use of dynamic allocation would need to be justified thoroughly in
code reviews; perhaps these developers would benefit from being put through
the same process.
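
The alternative Linus suggests later in the thread (a small per-thread cache
of message allocations) looks roughly like this toy free-list. Purely
illustrative Python, not dbus code:

```python
import threading

class MessagePool:
    """Toy per-thread cache of message buffers: reuse instead of malloc/free."""

    def __init__(self, max_cached=8, nbytes=1024):
        self._local = threading.local()   # each thread gets its own cache
        self._max_cached = max_cached
        self._nbytes = nbytes

    def acquire(self):
        cache = getattr(self._local, "cache", None)
        if cache is None:
            cache = self._local.cache = []
        if cache:
            return cache.pop()            # fast path: reuse a cached buffer
        return bytearray(self._nbytes)    # slow path: a real allocation

    def release(self, buf):
        cache = self._local.cache
        if len(cache) < self._max_cached:
            cache.append(buf)             # keep it around for the next message

pool = MessagePool()
a = pool.acquire()
pool.release(a)
b = pool.acquire()
print(a is b)  # True -- the buffer was reused, no new allocation
```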

The latter is a phenomenon which arises from "excessive modularity": the
functionality of the system has been split into so many little pieces that the
time each function contributes to the overall total is tiny. Instead of seeing
an obvious "80% of the time is being spent here" that could easily be targeted
for optimisation, that 80% is scattered amongst several dozen functions
taking 1-2% each. The bottleneck isn't concentrated in one area --- the whole
system is uniformly inefficient. It's extremely difficult to optimise a system
like this because nothing in particular stands out as being optimisable. I've
had to optimise some large Java applications that were like this, and the
solution was basically to remove most of the code and rewrite it to get rid of
many chains of indirection.
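
A contrived sketch of the phenomenon: the same work smeared across a chain of
trivial wrappers gives a profiler nothing to point at, because each frame
accounts for only a sliver of the runtime. Illustrative Python, made-up names:

```python
# One hot function: a profiler points straight at it.
def direct(n):
    return sum(i * i for i in range(n))

# The same work split over a chain of do-nothing layers: each frame now
# shows a tiny percentage in a profile, yet total overhead has gone up.
def layer5(n): return sum(i * i for i in range(n))
def layer4(n): return layer5(n)
def layer3(n): return layer4(n)
def layer2(n): return layer3(n)
def layer1(n): return layer2(n)

print(direct(1000) == layer1(1000))  # True -- same result, flatter profile
```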

~~~
yoklov
Frequently the second problem is exacerbated by cache misses. Every operation
becomes (much!) slower, but nothing stands out.

Not a popular opinion here, but IMO excessive modularity like that is
practically the definition of bad, hard-to-follow code. Ignoring performance
issues, everything happens somewhere else, and large changes now take several
times longer, since the changes are not local to a function. The code is
several times longer than it otherwise would be due to all the function
declarations...

It's a nightmare. It tends to be the kind of code you feel productive while
writing ("I'm cleaning up this 200 line function"), but is really just making
the codebase worse (is there a general term for this kind of false
productivity? It's a common problem I see).

~~~
_pmf_
> It's a nightmare. It tends to be the kind of code you feel productive while
> writing ("I'm cleaning up this 200 line function"), but is really just
> making the codebase worse (is there a general term for this kind of false
> productivity? It's a common problem I see).

This issue is pervasive when Desktop/Web developers try to improve embedded
software. I've achieved a thousandfold increase in performance by converting
an embedded data logger from using printf to using a dedicated formatting
function (most of the time was spent on parsing the format string and
performing allocations).
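
The shape of that change is simple: printf-style formatting re-parses the
format string on every call, while a dedicated formatter does only the
conversions it needs. A rough Python sketch (the original was embedded C;
these names are made up):

```python
def log_line_printf_style(seq, value):
    # Generic path: "%d,%d\n" is scanned and interpreted on every call,
    # which is the kind of work that dominated the data logger's time.
    return "%d,%d\n" % (seq, value)

def log_line_dedicated(seq, value):
    # Dedicated formatter: no format string to parse, just two integer
    # conversions and fixed separators.
    return str(seq) + "," + str(value) + "\n"

print(log_line_printf_style(7, 42) == log_line_dedicated(7, 42))  # True
```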

~~~
TheLoneWolfling
The compiler doesn't do the parsing for printf at compile time for the common
cases? That's semi-surprising.

~~~
TD-Linux
In fact it does.

[http://stackoverflow.com/questions/19499618/gcc-printf-
optim...](http://stackoverflow.com/questions/19499618/gcc-printf-optimization)

~~~
detaro
Or rather: compilers exist that do. The embedded world has a lot of strange
toolchains.

------
supertruth
Linus is just being Linus. He's brusque, nonchalant. He's inflammatory to get
people to listen. His thesis here is that bad DBus performance isn't due to
context-switching overhead or buffer copies (which can be solved by moving the
daemon into the kernel), but instead it's due to malloc-intensive /
utf8-parsing-intensive marshalling.

Secondly, he's saying that if performance is being used as an argument for
kdbus, then that's an invalid argument.

He's totally right by the way. In this pure message-passing benchmark, where
the message-passing overhead is the majority of the work, the slowness is not
in kernel-scheduling/system-call/kernel-buffer-copies. People mistook a
potentially impossible-to-overcome bottleneck for the most relevant bottleneck.

But that doesn't mean there isn't a reason for kdbus. Kdbus allows for much
better authentication than UNIX sockets do (you can authenticate with pid,
pgid, uid, gid, kdbus token, etc.). Also it allows for message-passing
security policies to live in the kernel which is crucial for security
applications. The tangential performance benefits are nice too, even though
the bottleneck wasn't in the kernel to begin with.
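
For comparison, the baseline that plain UNIX sockets already give you is
SO_PEERCRED: the kernel reports the peer's pid/uid/gid. The kdbus argument is
about richer policy on top of that, not the raw credentials. Linux-only
sketch:

```python
import os
import socket
import struct

# A connected pair of UNIX sockets; the "peer" here is this same process.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# SO_PEERCRED: the kernel tells us the pid/uid/gid of whoever is on the
# other end (struct ucred: three ints).
raw = b.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                   struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", raw)

print(pid == os.getpid(), uid == os.getuid(), gid == os.getgid())
```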

~~~
voidz
Responses like yours are why I enjoy visiting Hacker News.

Sorry for going meta - I'm just a bit disappointed because your tone used to
be the normal thing on HN, even when there were just as many disagreements.
People were able to keep it friendly. These days, not so much; I find HN has
become too mainstream and "reddity", and the tone in general too emotional and
aggressive about who or what is right.

This wasn't unexpected, but still, it is good to see not everyone has moved
into that direction.

------
pjc50
Some form of notification publish/subscribe functionality is necessary in
modern Linux. But dbus isn't very UNIX-y. It's also strangely opaque. With
pipelines, you can see how the plumbing works. With dbus, programs can
interact in nonobvious ways. Even worse is polkit, a poor replication of
Windows Group Policy.

The most unixy solution is buried at the bottom of this thread, from Plan 9:
[https://news.ycombinator.com/item?id=9450988](https://news.ycombinator.com/item?id=9450988)
; but if that's not popular, then I think what people actually want for
desktop purposes is an equivalent of the Windows PostMessage system.

~~~
XorNot
Plan 9 in no way _implies_ performance. Just because you can make it look like
a filesystem, doesn't mean it actually is going to achieve anything special
performance wise.

~~~
jff
And in fact the performance of Plan 9 tended to be extremely poor when
compared apples-to-apples with Linux.

It's just that everything was so simple it _felt_ fast, even if your
filesystem throughput was crap.

------
vidoc
It all started with a microbenchmark:

[http://thread.gmane.org/gmane.linux.kernel/1930358/focus=193...](http://thread.gmane.org/gmane.linux.kernel/1930358/focus=1937442)

DBUS implementation aside (which seems to leave a lot of room for
optimization), my initial reaction was that the gain was _rather_ minimal
compared to the claims of the systemd folks.

~~~
deng
That was my first reaction as well. But to be fair, the kdbus people always
said the real speedup will be seen for large packets.

However, the real issue is how slow overall DBus seems to be according to
those numbers. As Linus says earlier in the thread: "No way should it take 4+
seconds to send a 1000b message to back and forth 20k times." This is indeed
rather shocking.
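
The arithmetic is damning enough on its own:

```python
seconds = 4.0          # "4+ seconds" from the quote
round_trips = 20_000   # "20k times"

per_trip_us = seconds * 1e6 / round_trips
print(per_trip_us)     # 200.0 µs for each 1000-byte round trip
# For context, a bare ping-pong over a UNIX socketpair typically costs
# single-digit microseconds on modern hardware, so the bulk of those
# 200 µs is being spent in userspace.
```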

~~~
fulafel
So what's the use case for sending large packets of data over dbus where the
performance gains warrant integrating the dbus server to the kernel? Are they
planning to migrate the X server to dbus?

~~~
deng
IIRC the use case is to stream media over DBus, which currently no one is
doing because it is too slow.

~~~
semi-extrinsic
So: if DBus was suddenly fast enough to stream media over, what applications
would be using this feature, for what sets of streaming endpoints?

~~~
scrollaway
KDBus was designed with sandboxed applications in mind. So, use-case-wise,
think Android-style "use the camera app to take a picture/video", where the
app transfers the media to another app over kdbus, thus without leaking
information about the system.

~~~
jschwartzi
If the point is to have Android style IPC, why not just pull a version of
Binder into the mainline kernel?

~~~
the_why_of_y
Android's Binder has been merged into the Linux kernel (not just "staging",
the real thing) since 3.19... half a year ago now.

[https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=777783e0abae3cab7555bb182776f9ffaa35631a)

But not many people are aware of that; since neither systemd nor Lennart
Poettering have anything to do with it, the usual armchair architects did not
paste their, uhm, well considered opinions about how much it sucks and is the
end of the UNIX philosophy all over the comment threads when it happened.

------
joelthelion
He's not saying DBus as a concept is bad, just that the implementation sucks.
If he's right, that's something that can be fixed relatively easily (compared
to a major architectural change).

~~~
TD-Linux
I agree, the OP title is a bit misleading.

I use dbus every day on my laptop and it works totally fine - I don't notice
its existence. I'm all for improvements, but I'm quite patient and quite happy
to wait this one out.

~~~
reidrac
I made a silly desktop app some time ago and it used DBUS to get notifications
from NetworkManager when the system went online/offline. Nothing too fancy,
very few lines of code.

When I was implementing that, I managed to get several segfaults from my
Python code. All together seemed a little bit fragile to me :(

That was 5 years ago, things are probably better now (to be fair, I don't know
what was causing the crashes; NM was 0.8 back then), but when I read Linus'
comments I can't help thinking he's probably right.

This is an anecdote and all, but my point is that until I had to use it...
DBUS was pretty good and was working fine :)

~~~
belorn
Were you using threads? Python really should not crash, but combine it with
library imports implemented in c and threads, and you get really subtle race
conditions which result in segfaults. I have myself been forced to debug why
Python segfaulted, and it was a standard library call which internally used
threading, and that caused a conflict with a C library that was not thread-
safe.

~~~
reidrac
Sorry, it wasn't Python itself. It was a system component that crashed as a
consequence of my Python code using DBUS.

------
castell
> Since then I'm convinced that people who are inventing RPC solve non-
> existing problems.

Many RPC solutions ultimately failed because they were slow and
overdesigned/complex. (e.g. CORBA, Network OLE/DCOM, Java RMI, XML-RPC/SOAP
and other XML-based protocols, etc.)

Whereas e.g. REST (if you call it even RPC) is just very simple and enough for
most purposes.

It's more that the concept of the Component Object Model failed. For example,
OLE (the base technology behind DCOM) doesn't fly outside the legacy Office
usage. It goes without saying that binary-based implementations of such a
Component Object Model are faster than XML-based ones - a fad of the last ten
years.

The component format (if you can call it that) that just works is HTML5.

Edit: okay, it seems it's a controversial topic (My comment used examples
which were DCOM and OLE 1+2 (the original MS Office thing "Compound document")
-
[http://en.wikipedia.org/wiki/Object_Linking_and_Embedding](http://en.wikipedia.org/wiki/Object_Linking_and_Embedding)
and
[http://en.wikipedia.org/wiki/Compound_document](http://en.wikipedia.org/wiki/Compound_document)
, not COM directly)

~~~
roel_v
You do realize that pretty much the whole of Windows is, underneath, based on
COM, including most of the .NET-exposed system APIs?

I'll agree that COM and DCOM are... hairy, if you're implementing COM objects
'by hand'. But there's a reason for all of it, and I have yet to see an easier
_and_ more flexible way to write cross-language components. I could write a
COM object in C++, call it from VBScript in a very simple command line
program, as well as use it directly in a Web application in (here it comes!)
_1999_! That's _15_ years ago! And while COM is universally reviled now, and
none of the cool kids will want to be seen within 100 feet of it (actually,
the 'cool kids' don't even know what COM is any more, it's so 2005 to hate on
COM...), for those who spend 6 months on understanding it, it worked very
well, and has been supported for close to 20 years now (on Windows, that is).

~~~
72deluxe
COM is indeed an interesting beast and very useful. Being able to integrate
other programs into yours by relying on COM is very helpful. I don't
know of a way on Linux or OSX to embed a word processor to work on documents
but never show the user (but this might be due to my lack of knowledge about
those systems - I would welcome being enlightened). I once wrote something
that processed a plethora of Word documents using Word via COM to extract data
from them and shove it into a database.

I can't fully remember how I did it but I recall type libraries, including
generated headers etc. but I remember being impressed by it.

And as you state, all calls within Windows rely on COM. Stop the RPC service
and observe as your system becomes unusable.

~~~
icebraining
LibreOffice uses UNO, which is their own version of COM:
[http://en.wikipedia.org/wiki/Universal_Network_Objects](http://en.wikipedia.org/wiki/Universal_Network_Objects)

~~~
castell
Mozilla has its own XPCOM too:
[http://en.wikipedia.org/wiki/XPCOM](http://en.wikipedia.org/wiki/XPCOM)

Read the Criticism section:

 _" XPCOM adds a lot of code for marshalling objects between different usage
contexts (e.g. different languages). This leads to code bloat in XPCOM based
systems. This was one of the reasons why Apple forked KHTML to create the
WebKit engine (which is now used in several web browsers in various forms,
including Safari and Google Chrome) over the XPCOM-based Gecko rendering
engine for their web browser.

The Gecko developers are currently trying to reduce superfluous uses of XPCOM
in the Gecko layout engine. This process is commonly referred to as
deCOMtamination within Mozilla."_

But my original top comment was about RPC and the compound document format
(OLE as example), not about COM.

Apple had a compound document format too (dead since 1997):
[http://en.wikipedia.org/wiki/OpenDoc](http://en.wikipedia.org/wiki/OpenDoc)

 _" OpenDoc's flexibility came at a cost. OpenDoc components were invariably
large and slow. For instance, opening a simple text editor part would often
require 2 megabytes of RAM or more, whereas the same editor written as a
standalone application could be as small as 32 KB. This initial overhead
became less important as the number of documents open increased, since the
basic cost was for shared libraries which implemented the system, but it was
large compared to entry level machines of the day. Many developers felt that
the extra overhead was too large, and since the operating system did not
include OpenDoc capability, the memory footprint of their OpenDoc based
applications appeared unacceptably large. In absolute terms, the one-time
library overhead was approximately 1 megabyte of RAM, at the time half of a
low-end desktop computer's entire RAM complement.

Another issue was that OpenDoc had little in common with most "real world"
document formats, and so OpenDoc documents could really only be used by other
OpenDoc machines. Although one would expect some effort to allow the system to
export to other formats, this was often impractical because each component
held its own data. For instance, it took significant effort for the system to
be able to turn a text file with some pictures into a Microsoft Word document,
both because the text editor had no idea what was in the embedded objects, and
because the proprietary Microsoft format was undocumented and required reverse
engineering.

It also appears that OpenDoc was a victim of an oversold concept, that of
compound documents. Only a few specific examples are common, for instance most
word processors and page layout programs include the ability to include
graphics, and spreadsheets are expected to handle charts. [...]

But certainly the biggest problem with the project was that it was part of a
very acrimonious competition between OpenDoc consortium members and Microsoft.
The members of the OpenDoc alliance were all trying to obtain traction in a
market rapidly being dominated by Microsoft Office. As the various partners
all piled in their own pet technologies in hopes of making it an industry
standard, OpenDoc grew increasingly unwieldy. At the same time, Microsoft used
the synergy between the OS and applications divisions of the company to make
it effectively mandatory that developers adopt the competing OLE technology.
In order to obtain a Windows 95 compliance logo from Microsoft, one had to
meet certain interoperability tests which were quite difficult to meet without
adoption of OLE technology, even though the technology was largely only useful
in integrating with Microsoft Office. OpenDoc was forced to create an
interoperability layer in order to allow developers to even consider adoption,
and this added a great technical burden to the project."_

And there were others too:
[http://en.wikipedia.org/wiki/Compound_document](http://en.wikipedia.org/wiki/Compound_document)

------
hp
The Linus profile is mostly a red herring here, because it is 1) a bad
benchmark with a bunch of blocking round trips and 2) mostly profiling the
gdbus bindings which are just one binding.

For a more in-depth performance discussion of dbus, check out
[http://lists.freedesktop.org/archives/dbus/2012-March/015024...](http://lists.freedesktop.org/archives/dbus/2012-March/015024.html)

Above noted on linux-kernel here:
[http://thread.gmane.org/gmane.linux.kernel/1930358/focus=193...](http://thread.gmane.org/gmane.linux.kernel/1930358/focus=1939533)

~~~
zurn
The PDF link is sadly dead... (The ML message references the PDF heavily.)

------
christianbryant
And [https://lwn.net/Articles/636997/](https://lwn.net/Articles/636997/)

But, let's stop bitching and come up with alternatives, solutions, or closure.
I'm tired of hearing about KDBUS (all love to gregkh) and DBUS.

~~~
userbinator
_alternatives, solutions, or closure_

The link you posted has what I think is the right one:

 _The problem is that probably this code is not needed at all._

To me, DBUS seems like a solution looking for a problem: The majority of the
time there is no real reason to have such a complex layer of abstraction for
IPC, when the system already provides much simpler alternatives.

~~~
bitwize
_To me, DBUS seems like a solution looking for a problem: The majority of the
time there is no real reason to have such a complex layer of abstraction for
IPC, when the system already provides much simpler alternatives._

Then you need to read and reread and reread Havoc Pennington's posts on why he
wrote d-bus in the first place:

[https://news.ycombinator.com/item?id=8648995](https://news.ycombinator.com/item?id=8648995)

[https://news.ycombinator.com/item?id=8649459](https://news.ycombinator.com/item?id=8649459)

D-Bus provides a service discovery system that lets you not only send messages
to named endpoints, but also monitor whether there's something on the other
end of the endpoint, and if necessary, request that it be started before you
send the message in a non-racy fashion.

It also provides a common, typed, structured, discoverable, auditable
serialization layer which is necessary for doing API-style IPC, and a benefit
besides since when you "just use sockets" you have to do all the serialization
and deserialization yourself, leading to possibly buggy code.
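
To make "do all the serialization yourself" concrete: hand-rolled,
length-prefixed framing over a raw socket looks like this, and it's exactly
where endianness and off-by-one bugs creep in. Illustrative names, not a real
protocol:

```python
import struct

HEADER = "!IB"  # network byte order: 4-byte payload length + 1-byte type tag
HEADER_LEN = struct.calcsize(HEADER)  # 5 bytes

def pack_msg(msg_type: int, payload: bytes) -> bytes:
    # The boilerplate a typed serialization layer like D-Bus's absorbs.
    return struct.pack(HEADER, len(payload), msg_type) + payload

def unpack_msg(data: bytes):
    length, msg_type = struct.unpack_from(HEADER, data)
    body = data[HEADER_LEN:HEADER_LEN + length]
    return msg_type, body

wire = pack_msg(7, b"hello")
print(unpack_msg(wire))  # (7, b'hello')
```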

Oh, and it also does multicast, which no other commonly used Unix IPC can do.

People who say "I don't see the problem that d-bus solves" aren't looking hard
enough and/or are ignorant of the issues of how software gets developed in a
modern environment. D-bus solves _many_ problems with IPC under Linux, and
makes developers' lives a whole lot easier when they have to connect with
other applications or services.

~~~
chris_wot
Is there a document that explains the why's and wherefores of dbus? Aside from
those posts.

~~~
teddyh
You can probably find it here:

[https://wiki.freedesktop.org/www/Software/dbus/](https://wiki.freedesktop.org/www/Software/dbus/)

~~~
tobik
Is there a paper/article that discusses and categorizes the differences
between IPC mechanism? Their pros and cons etc. What mechanisms are there?
What is used on Windows? OS X/iOS? QNX? BeOS? Android? Solaris? etc.

The Wikipedia article on IPC [https://en.wikipedia.org/wiki/Inter-
process_communication](https://en.wikipedia.org/wiki/Inter-
process_communication) is really incomplete.

~~~
johnny22
This article compares kdbus (and thus dbus) to Android's Binder:
[http://kroah.com/log/blog/2014/01/15/kdbus-
details/](http://kroah.com/log/blog/2014/01/15/kdbus-details/)

------
acqq
Linus' explanation is here:

[http://thread.gmane.org/gmane.linux.kernel/1930358/focus=193...](http://thread.gmane.org/gmane.linux.kernel/1930358/focus=1937442)

"Just to make sure, I did a system-wide profile (so that you can actually see
the overhead of context switching better), and that didn't change the
picture."

"The real problems seem to be in dbus memory management (suggestion: keep a
small per-thread cache of those message allocations) and to a smaller degree
in the crazy utf8 validation (why the f*ck does it do that anyway?), with some
locking problems thrown in for good measure. "

------
kasabali
I still fail to see why we need DBUS in the kernel, so please correct me and
clear up my lack of understanding. It is claimed that userspace dbus
performance is bad, but people want to use it at a much larger scale; what I
deduce from that is that nobody is using dbus at that scale right now, since
its performance sucks.

The reason for getting kdbus into the kernel - and not a generic IPC
mechanism, but specifically kdbus - is that user space that depends on dbus
will continue to work with kdbus.

What I still don't understand is: what is the point of keeping compatibility
with currently non-existent userspace, if software that depends on dbus right
now doesn't need performance enhancements and can just keep using regular
dbus, and software that needs a high-performance bus doesn't use dbus anyway,
since it's slow? Under these circumstances, why do they insist on making kdbus
strictly a dbus implementation and not a more generic IPC mechanism?

------
xorcist
Now is the time for Linus to disappear for a few days, and re-emerge with a
sane IPC design that both binder and dbus can build on.

Bootstrapped in itself, of course :) (I have no idea what that would even mean
in this context. But hey, it's all wishful thinking, right?)

------
kzrdude
But this doesn't benchmark systemd's userland dbus implementation. That one is
much better, right?

~~~
codys
I hope so, but no one except systemd is using it yet. Last time I checked, the
sd-bus.h header wasn't installed with systemd due to the systemd folks'
concern that the sd-bus API wasn't finalized yet.

It would be very interesting to benchmark it against the glib implementation
(which most people, even if they aren't using glib directly, are using these
days).

------
phjesusthatguy3
Can someone point out where the quote comes from? I'm not a DBus developer but
I'd like to know why the subject is what it is, and not something like "Issues
with capability bits and meta-data in kdbus" or "Kdbus needs meaningful
review".

~~~
hp
It's Linus being hyperbolic. Read follow-ups in the linux-kernel thread to see
more specifically what these benchmarks mean.

------
GutenYe
What's "potato", as said in the mailing list?

~~~
Sharlin
Internet slang for a device, often a camera, of poor quality or outdated
technology.

[http://www.urbandictionary.com/define.php?term=potato&defid=...](http://www.urbandictionary.com/define.php?term=potato&defid=6321730)

------
thuffy
Let's talk about the elephant in the room.

This is quite obviously the NSA adding massive additional attackable surface
area to the Linux kernel.

They did it to SSL/TLS, they did it to many others, they are now doing it in
earnest to Linux.

First systemd, now kdbus. Keep this up and OpenBSD, or something else, is
going to kill the Swiss cheese that Linux is becoming. I love Linux, so it is
sad to see this happening.

~~~
antocv
systemd, kdbus and pulseaudio - wow r00t lol. kthxbai /nsa.

Seriously, it's already a huge mess to administrate a GNU/Linux system with
systemd and dbus and polkit as it is. They're the only parts of my system
where I feel I can't really inspect the inner workings. It's a black box.

~~~
thuffy
I absolutely agree it is a mess for other reasons as well; but let us attack
it from all angles, and I think security is the most important one these days.

------
amelius
Should be rewritten in Erlang or Go, if you'd ask me.

~~~
JosephRedfern
Are you missing '/s'?

------
lmedinas
Another great Linus quote:

"The people who talk about how kdbus improves performance are just full of
sh*t."

[http://thread.gmane.org/gmane.linux.kernel/1930358/focus=193...](http://thread.gmane.org/gmane.linux.kernel/1930358/focus=1939166)

~~~
pmr_
You are not quoting the part where he shows his benchmark results, analysis of
the problem, and suggested solution. I don't necessarily agree with Linus'
tone or choice of words, but what you are doing is far worse. I'm not even
sure if you are trying to be sarcastic, but that does not even matter: You are
not contributing anything either way.

~~~
lmedinas
I have made an observation from Linus' comments which is sad, especially for
people who worked on KDbus all this time. If you think your comment helped
anything, maybe you should consider making such comments when you see ones
like the next one:

quote: """ amelius 3 hours ago

Should be rewritten in Erlang or Go, if you'd ask me. """

otherwise you are full of sh*t. :)

