
Bus1 – Kernel Message Bus - eeyepieinthesky
http://www.bus1.org/bus1.html
======
ktRolster
The API seems bad to me; ioctl() is ridiculously overloaded (disconnect,
connect, read, and write all go through ioctl(), for example). As an API, this
doesn't pass Linus' "good taste" test. As a solution, it's more complicated
than necessary.

On messaging in general: this is yet another IPC API from RedHat. They really
want to get an IPC API into the kernel over there. However, their proposals
seem 'ignorant,' in the sense that plenty of IPC research and work has been
done over the last half century (or more?), and their proposals seem focused
on their own small use cases. They seem unaware of how others have tried to
solve the IPC problem.

On why Linus doesn't want to integrate it: no kernel team is going to want
yet another poorly-designed IPC stack to maintain, one that hardly anyone
uses. That's been done before, and IPC in Unix is a mess as a result. Any
sane kernel dev would be resistant to this, unless the new proposal is
absolutely beautiful, and you can look at it and say, "yeah, that is really
great."

RedHat should go out, investigate the research that has been done, and become
experts on the topic. Learn _everyone's_ IPC problems, not just the problems
they have in their own insular community. Only then should they create an API
that is actually in good taste.

Then begin the political work of getting people to adopt it. Start with the
BSD teams. Start with the hobbyist OS devs at osdev.org. When the whole world
agrees that it is a good thing, then Linus will put it in the kernel too.

~~~
SFJulie
In terms of Linux design, ioctls are natural candidates as an underlying
mechanism for multi-process communication.

Quoting Steven Doyle

ioctl can be guaranteed by the kernel to be atomic. If a driver grabs a
spinlock during the ioctl processing then the driver will enter atomic context
and will remain in atomic context until it releases the spinlock.
[http://lwn.net/Articles/274695/](http://lwn.net/Articles/274695/)

Cooperative multitasking in critical sections requires thread safety and
atomicity.

What bothers me is: what the fuck is an iovec! That is the most important part
of the spec, and yet it is not defined. My fear is that to accommodate
«industry-grade» developers they will use dynamically allocated
structures instead of fixed-size ones. And we all know that malloc in user
space is already the door to hell, but in kernel space it is a direct nightmare.

What also makes me cringe is that it is a distributed system (and I have
played quite a lot with them) and they say they have tackled the problem of
global ordering of the messages. Well, be it on the network or in silicon, I
have never seen anyone achieve this.

I fear they are over-promising and will under-deliver.

~~~
mfukar
> ioctl can be guaranteed by the kernel to be atomic. If a driver grabs a
> spinlock during the ioctl processing then the driver will enter atomic
> context and will remain in atomic context until it releases the spinlock.

That's a red herring though. A new syscall implementation can also provide the
same guarantees.

> What bothers me is: what the fuck is an iovec!

Pretty sure they're referring to Berkeley-style UIO. Any discussion about
those soon devolves into other types of IPC, in my experience.
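For context, the Berkeley-style iovec (from `<sys/uio.h>`) is just a
fixed-size (base pointer, length) pair, nothing dynamically sized. A minimal
sketch of gather-writing two caller-owned buffers in one syscall (the helper
name is mine, not anything from bus1):

```c
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/* Gather a two-part message from caller-owned, fixed-size buffers and
 * hand it to the kernel in a single writev() call. The iovec array
 * itself only stores (pointer, length) pairs; no allocation happens. */
ssize_t send_two_parts(int fd, const char *hdr, const char *body)
{
    struct iovec iov[2];

    iov[0].iov_base = (void *)hdr;
    iov[0].iov_len  = strlen(hdr);
    iov[1].iov_base = (void *)body;
    iov[1].iov_len  = strlen(body);

    return writev(fd, iov, 2);
}
```

Through a pipe, `send_two_parts(fd, "hdr:", "payload")` delivers
"hdr:payload" as a single write.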

> What also makes me cringe is that it is a distributed system (and I have
> played quite a lot with them) and they say they have tackled the problem of
> global ordering of the messages. Well, be it on the network or in silicon, I
> have never seen anyone achieve this.

They are claiming there is a global order without any global synchronization.
That is textbook impossible, and leads me to believe what's written on the
site is either waaay misunderstood or waaay manipulative.

~~~
tomegun
> They are claiming there is a global order without any global synchronization.

Need to update your textbook ;)
[http://research.microsoft.com/en-us/um/people/lamport/pubs/time-clocks.pdf](http://research.microsoft.com/en-us/um/people/lamport/pubs/time-clocks.pdf)

In particular, what we did is described here:
[https://github.com/bus1/documentation/wiki/Message-ordering](https://github.com/bus1/documentation/wiki/Message-ordering)

If anything is unclear or misleading, please let me know and I'll try to
clarify.

~~~
mfukar
I've read both, thanks. I clearly remember that the total ordering is
"somewhat arbitrary" in Lamport's own words, which is what I pointed out here
[[https://news.ycombinator.com/item?id=12803907](https://news.ycombinator.com/item?id=12803907)],
too.

I admit I haven't read the implementation to see what kind of bounds you
derive, and I couldn't find them in the wiki either. So, I think I'll go with
"accidentally exaggerated" instead of "manipulative".

~~~
tomegun
"[S]omewhat arbitrary" is a correct description. We take something that is
fundamentally partially ordered (real-world events that may happen at exactly
the same time), respect the partial order, and extend it to a total order. The
extension is arbitrary, but I fail to see the problem with that, or how it
contradicts anything we wrote.
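Concretely, the extension amounts to comparing (timestamp, peer id) pairs. A
sketch of the idea (not our actual code; the names are made up for
illustration):

```c
#include <stdint.h>

/* An event is stamped with a logical timestamp plus a unique per-peer
 * id. Equal timestamps can only occur for concurrent events, so the id
 * tie-break is arbitrary but can never reverse a causal relationship:
 * causality always implies a strictly larger timestamp. */
struct stamp {
    uint64_t clock;  /* logical (Lamport) timestamp */
    uint64_t peer;   /* unique sender id            */
};

/* Total order over all events; returns <0, 0, >0 like strcmp. */
int stamp_cmp(const struct stamp *a, const struct stamp *b)
{
    if (a->clock != b->clock)
        return a->clock < b->clock ? -1 : 1;
    if (a->peer != b->peer)
        return a->peer < b->peer ? -1 : 1;
    return 0;
}
```

Every peer applying `stamp_cmp` agrees on the same order, even for concurrent
events, which is the consistency the wiki describes.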

Could you explain what bounds you are interested in and in what way you think
anything is exaggerated? I would like to update the docs if necessary.

~~~
mfukar
I could. I would rather put it in an email or PR. I'll try to put it together
as soon as I have some time.

~~~
tomegun
Thanks.

------
CalChris
A discussion of where Bus1 fits in the bestiary of Linux IPC proposals:

 _Bus1: a new Linux interprocess communication proposal_

[https://lwn.net/Articles/697191/](https://lwn.net/Articles/697191/)

~~~
agumonkey
Someone on twitter suggested that Apple's libxpc should be used. I'd never
heard of it, but if someone has, I'd be happy to know more about it.

~~~
ktRolster
Here you go:
[https://developer.apple.com/library/content/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingXPCServices.html](https://developer.apple.com/library/content/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingXPCServices.html)

It is higher level, though; it is not something you would integrate into the
kernel (as bus1 intends to be). OSX has decent IPC, since it is based on the
Mach research kernel from CMU, which was more or less designed around the
concept of messaging to begin with.

AppleScript has interesting IPC concepts as well.

------
Matthias247
Although I have brought a lot of high-level RPC/IPC systems into production,
I'm really having a hard time imagining where I could use this.

As far as I understand, it gives me a few more high-level features (security,
multicast) compared to other IPC primitives (sockets, pipes, ...). However,
once I use this, my application (or high-level IPC framework) will be locked
to Linux and no longer be portable to other platforms. So for any application
that should be at least halfway portable, I would prefer something that works
on the more common primitives (most likely sockets) and build something more
powerful in a cross-platform way on top of it (like HTTP, gRPC, Thrift, ...).

A full-featured low-level framework makes sense if I have a whole set of
applications on top of it which is not intended to be portable and uses it
exclusively. Something like the Android/iOS/... platforms. But the current
ones have already settled on their infrastructure (e.g. on Binder), and even
if new ones come up, there is a high possibility that they wouldn't like at
least something in Bus1 and would instead come up with their own solution.

------
jojo3000
There was also a presentation on Bus1 on the systemd.conf 2016:
[https://www.youtube.com/watch?v=6zN0b6BfgLY](https://www.youtube.com/watch?v=6zN0b6BfgLY)

------
jacquesm
Previous thread on HN:

[https://news.ycombinator.com/item?id=10708177](https://news.ycombinator.com/item?id=10708177)

------
ndesaulniers
So, Binder? And man, when people call ioctl a bastardized syscall... looks
like 9 syscalls in one!

~~~
TD-Linux
... that's pretty much the point of ioctl: a syscall multiplexer in the
context of an fd. 9 is pretty low - check out /dev/cdrom or /dev/dri/* if you
want to see a lot.

------
ausjke
How is this related to systemd (not just technically), if anything?

------
solidsnack9000
> Also in the case where there can be no causal relationship, we are
> guaranteed a global order. In case two events happend concurrently, there
> can never be any inconsistency in which occurred before the other. By way of
> example, consider two peers sending one message each to two different peers,
> we are guaranteed that both the recipient peers receive the two messages in
> the same order, even though the order may be arbitrary.

How could that possibly work?

~~~
mfukar
With their "synchronize-local-clocks" approach, it doesn't.

They are using Lamport's algorithm to synchronize the clocks. However,
Lamport's approach creates a _partial_ ordering, and to make that a _total_
ordering you need some mechanism to break "ties". For instance, the PID, or
whatever.

The catch is that the relationship derived from this arbitrary tie-breaking
mechanism has nothing to do with causality, and therefore the total order it
imposes is only an artifact of the mechanism chosen.

Finding a tie-breaking mechanism that corresponds to the sending events is, in
Lamport's own words, "not trivial".
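For readers following along, the update rules in question are just the
textbook Lamport clock (my own sketch, not bus1's implementation):

```c
#include <stdint.h>

struct lclock { uint64_t time; };

/* Local event (e.g. a send): tick the local clock and stamp the event. */
uint64_t lclock_tick(struct lclock *c)
{
    return ++c->time;
}

/* On receive: jump past the sender's stamp, then tick. This is what
 * guarantees a causally later event always gets a strictly larger stamp. */
uint64_t lclock_receive(struct lclock *c, uint64_t msg_stamp)
{
    if (msg_stamp > c->time)
        c->time = msg_stamp;
    return ++c->time;
}
```

Two sends that overlap in time can end up with equal stamps, which is exactly
where the arbitrary tie-break (PID or whatever) has to come in.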

~~~
tomegun
Indeed that is how we break ties (not exactly the PID, but you get the idea).

The reason this works is that the only time we can have a tie is if there can
be no causality between the events. I.e., the two sending events happen
concurrently: the two ioctl calls overlap in time, so there would be no way
for one to have caused the other.

What problem do you see with this?

~~~
mfukar
I foresee a problem where people confuse the wording of "total order on all
messages" in the wiki to mean there is a "global total order" - in other
words, that bus1 solves distributed systems and we can all go home - and
build buggy systems on this assumption. I'm not saying the concept is
flawed or the implementation buggy, or anything like that.

PS. Neil Brown in the LWN article already conflates "global" and "total"
order.

~~~
jasonwatkinspdx
I'd suggest reading the hybrid logical clocks paper.

~~~
mfukar
I have - I'm actually working with it on a multi-version IPC provider (totally
unrelated to bus1 & friends). Is it relevant here? I know they're the latest
and greatest, but they're not without problems either.

~~~
jasonwatkinspdx
Based on your comments about lamport clocks, etc, I thought you'd find it
interesting.

~~~
mfukar
It is indeed. Cool applications too. I remember at least GUN - a graph db
engine - uses them to great effect, and its author hangs out here, iirc.

