
Cap'n Proto 0.8: Streaming flow control, HTTP-over-RPC, fibers - kentonv
https://capnproto.org/news/2020-04-23-capnproto-0.8.html
======
subhobroto
Kenton,

THANK YOU for not only designing Cap’n Proto but also continuing to work on
it.

Before today, I had a slight concern that, in the wake of Sandstorm, Cap'n
Proto would fall in priority. Perhaps Cloudflare Workers is now making a great
case for it?

My use case has been Cap'n Proto with Python. Being able to do RPC is icing on
the cake.

I will be looking into streaming fields of entities (classes) soon - the idea
being that decentralized microservices have local knowledge of how to
transform and output a field, and other modules/microservices just reach out
to them and ask for the same.

A silly question: do you foresee any issues getting Cap'n Proto 0.8 to work
over a completely locked-down Docker environment that only allows an HTTPS
proxy as the connection between nodes that use Cap'n Proto RPC?

PS: Thoughts on why Cap'n Proto did not win over MessagePack? Did MessagePack
have a better JS implementation? Is it the convenience of not needing to
define a schema with MessagePack vs. Cap'n Proto's schema requirement?

~~~
kentonv
> A silly question: do you foresee any issues getting Cap'n Proto 0.8 to work
> over a completely locked-down Docker environment that only allows an HTTPS
> proxy as the connection between nodes that use Cap'n Proto RPC?

If it supports WebSocket, it should be relatively easy to layer Cap'n Proto
RPC on top of that. Alternatively, some proxies allow full-duplex HTTP
(request and response bodies streaming simultaneously), which could also be
enough to bootstrap a connection on top of -- but in practice that tends to
run into a lot of problems.

Otherwise, that's tough. HTTP is fundamentally a one-way, FIFO request-
response protocol, whereas Cap'n Proto is multi-directional and asynchronous.
Starting a separate HTTP connection for each call -- with connections
initiated in both directions -- would be pretty ugly and have lots of issues
with synchronization and routing.

> PS: Thoughts on why Cap'n Proto did not win over MessagePack? Did
> MessagePack have a better JS implementation? Is it the convenience of not
> needing to define a schema with MessagePack vs. Cap'n Proto's schema
> requirement?

I don't really consider MessagePack a direct competitor to Cap'n Proto. It's
more of a competitor to JSON. Schema-driven vs. non-schema-driven changes
everything about how you use a serialization format.

A more apples-to-apples comparison is Protobuf. Protobuf is much more popular
for a simple reason: it has had a lot more engineering investment, leading to
mature implementations in more languages and lots of great tooling that capnp
doesn't have (yet). No amount of clever design can beat that.

~~~
anderspitman
> If it supports WebSocket, it should be relatively easy to layer Cap'n Proto
> RPC on top of that.

How would the new streaming functionality be implemented over WebSockets? WS
has no flow control. You can check bufferedAmount, but I found it to be fairly
useless[0]. Maybe it's improved in the last 1.5 years, or I was using it
wrong.

> HTTP is fundamentally a one-way, FIFO request-response protocol, whereas
> Cap'n Proto is multi-directional and asynchronous. Starting a separate HTTP
> connection for each call -- with connections initiated in both directions --
> would be pretty ugly and have lots of issues with synchronization and
> routing.

I think you could get a long way using HTTP/2 and server-sent events.

[0]: [https://github.com/websockets/ws/issues/492](https://github.com/websockets/ws/issues/492)

EDIT: added link to bufferedAmount issue

~~~
kentonv
> How would the new streaming functionality be implemented over WebSockets? WS
> has no flow control.

Sure it does. WS is just a framing protocol on top of a regular TCP
connection. Though it sounds like you're not talking about the protocol so
much as the JavaScript API, which perhaps doesn't give you enough visibility
into the underlying TCP socket state.

But a BBR-like algorithm could still work. Basically (massively
oversimplified):

1) Determine the connection latency based on the fastest response you've seen.

2) Determine the connection throughput by tracking the highest throughput
you've ever seen.

3) Set your window size to be a little bit more than latency * throughput.

This "should" saturate the connection with just a little bit of buffering. If
it doesn't saturate the connection, then your measured throughput will go up
until it does.

This way there is no need to ask the OS or browser to tell you how much is
buffered...

Disclaimer: I have yet to actually implement something like this. Obviously,
it gets tricky in the details.
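As a rough illustration only (hypothetical names, not real Cap'n Proto code), those three steps might look like:

```python
# Rough sketch of a BBR-like window estimator. Names and structure are
# hypothetical; this is not Cap'n Proto's actual implementation.

class BbrLikeWindow:
    def __init__(self, headroom=1.25, initial_window=64 * 1024):
        self.headroom = headroom              # the "a little bit more" factor
        self.initial_window = initial_window  # used before any measurements
        self.min_rtt = float("inf")           # fastest response seen (seconds)
        self.max_throughput = 0.0             # highest bytes/sec ever seen

    def on_ack(self, bytes_acked, rtt, interval):
        # 1) Latency estimate: the fastest response ever observed.
        self.min_rtt = min(self.min_rtt, rtt)
        # 2) Throughput estimate: the highest rate ever observed.
        if interval > 0:
            self.max_throughput = max(self.max_throughput,
                                      bytes_acked / interval)

    def window(self):
        # 3) Window = a little more than latency * throughput
        #    (the bandwidth-delay product).
        if self.min_rtt == float("inf") or self.max_throughput == 0.0:
            return self.initial_window
        return int(self.min_rtt * self.max_throughput * self.headroom)
```

For example, with a measured throughput of 10 MB/s and a fastest RTT of 50 ms, the window settles around 625 KB of data in flight.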

~~~
anderspitman
> Sure it does. WS is just a framing protocol on top of a regular TCP
> connection. Though it sounds like you're not talking about the protocol so
> much as the JavaScript API, which perhaps doesn't give you enough visibility
> into the underlying TCP socket state.

Every WS implementation I've ever seen is nonblocking on both sides. Are you
aware of any that aren't? I'm not even sure the spec allows for that.

But yes you are correct that you can implement more advanced algos on top.

EDIT: Sounds like blocking implementations do exist, but unfortunately I'm
constrained to a browser environment, and all the browsers are nonblocking.

~~~
kentonv
Heh. The WebSocket implementation I wrote for KJ does in fact provide
backpressure (the send() method returns a promise that resolves when it's a
good time to send the next message). I guess I'm surprised to hear that most
don't...

~~~
anderspitman
I think it's more likely I've just been too deep in JS-land. Does KJ send() do
any internal buffering, or wait for the OS to tell it to send more?

~~~
decentralised
I've been looking into streams in detail recently because I'm preparing for
the OpenJSF exam... maybe this is relevant to you too:
[https://nodejs.org/es/docs/guides/backpressuring-in-streams/](https://nodejs.org/es/docs/guides/backpressuring-in-streams/)

~~~
anderspitman
Thanks for the link. I'm actually quite familiar with that article. It's been
very useful for me when designing omnistreams. You may find some of the links
on the bottom of this page useful:

[https://github.com/omnistreams/omnistreams-spec](https://github.com/omnistreams/omnistreams-spec)

------
malkia
I'm using gRPC/protobuf for the simple fact that it supports plenty of
languages: C++, C#, Python, Dart, even Rust, and others (Node, PHP, etc.).
What is the state of Cap'n Proto when it comes to this? I've tried to look at
the C# project, but the page was missing on GitHub.

~~~
kentonv
Admittedly, this is a huge weakness of Cap'n Proto. The C++ implementation
(which is the one I use and work on personally) is mature. There are pretty
solid implementations in Rust and Go, too. But it falls off after that, with
most implementations being serialization-only and at various levels of
(im)maturity.

There's not a lot I can do about this. Cap'n Proto adoption doesn't directly
drive revenue for anyone in particular, so I can't hire an army of engineers
to throw at it... People who want better Cap'n Proto support in each language
need to step up to help make it happen.

One thing I am looking at doing is making it easier for per-language
serialization implementations to bind to the C++ RPC implementation. This
might make a lot of sense, since the serialization implementations have wide
APIs but shallow implementation details, while the RPC implementation is a
pretty narrow API with very complex implementation. And it turns out Cap'n
Proto messages are super-easy to pass between languages since the in-memory
format is by design the same across languages -- passing around byte buffers
tends to be pretty easy.

~~~
malkia
A big selling point for me would be a built-in IPC mechanism instead of TCP or
UDP - be it mailslots (Windows), named pipes, shared files, etc. - it does not
matter. There are some projects that implement IPC over gRPC, but they are not
part of the actual project.

Why am I asking for this? For the simple reason that I don't want to deal with
port allocation on a CI.

~~~
kentonv
Cap'n Proto works great over Unix sockets. For sandboxing use cases in
Sandstorm and Cloudflare Workers, I've commonly used it over anonymous
socketpairs -- definitely no ports involved there. :)

In fact, you can adapt the RPC system to operate over any kind of byte stream
transport pretty easily, by implementing the kj::AsyncIoStream interface. Or
if you already have a standard file descriptor (or iocp-compatible HANDLE in
Windows), you can use that.

One fancier thing that's still on the roadmap is shared-memory IPC. Cap'n
Proto's zero-copy serialization was really built for this, but so far, for all
my real-world projects, Unix sockets have been fast enough, so I haven't been
forced to fully implement a shared-memory transport yet. Maybe soon?

~~~
zokier
It's a bit of a frivolous question, but would it be easy to use stdin/stdout
as a transport for capnp?

~~~
kentonv
Sure, you could do that. You'd need to write a little shim to bind the
separate input-stream and output-stream FDs into a single AsyncIoStream, but
that shouldn't be hard.
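For illustration, a Python analogue of such a shim (a hypothetical class, not KJ's actual API) might look like:

```python
import sys

class DuplexPipeStream:
    """Hypothetical shim that binds a separate input stream and a separate
    output stream into one duplex object -- a Python analogue of wrapping
    the stdin/stdout FDs in a single kj::AsyncIoStream."""

    def __init__(self, reader, writer):
        self.reader = reader   # e.g. sys.stdin.buffer
        self.writer = writer   # e.g. sys.stdout.buffer

    def read(self, n=-1):
        return self.reader.read(n)

    def write(self, data):
        self.writer.write(data)
        self.writer.flush()    # don't let RPC messages sit in a buffer

# Binding stdin/stdout into one transport object:
transport = DuplexPipeStream(sys.stdin.buffer, sys.stdout.buffer)
```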

------
malkia
Kenton, have you looked into what Fuchsia is doing with FIDL?
[https://fuchsia.googlesource.com/docs/+/ea2fce2874556205204d...](https://fuchsia.googlesource.com/docs/+/ea2fce2874556205204d3ef70c60e25074dc7ffd/development/languages/fidl/tutorial.md)
(not sure if recent page, but good enough)

Just wondering about your opinion... Thanks!

~~~
kentonv
I've heard it mentioned but haven't had the chance to look closely.

------
kortex
I just want to say, I've been following Capn for a few years now, excellent
work, keep it up! I feel it's a better architecture overall than protobuf and
wish I had more time to contribute to the project.

~~~
kentonv
Thanks!

------
beagle3
kenton, I admire your work on Cap'n Proto and Sandstorm.

Question about time traveling promises - I read the documentation, and it
sounds to me like - taken to the logical extreme, that would effectively mean
an interpreter - because at some point, you don't just want to use return
values from one call as parameters in the other - you'd also want e.g. to
build a one-network-roundtrip "create-if-not-exists" call from "exists(name)"
and "create(name)", which would require the second call's _activation_ to be
dependent on the first call's result (rather than its parameters) - which is
just a small change.

But if you consider that, and error handling, and a few other relatively
simple cases, you quickly end up with an informally specified, bug ridden,
incomplete implementation of Emacs Lisp inside the RPC implementation.

So, my question is - where do you draw the line, and how do you decide to draw
it? I don't believe there's a right answer, but I wonder about your
philosophy.

~~~
kentonv
Indeed, the comments allude to this possibility (note the TODO):

[https://github.com/capnproto/capnproto/blob/77f20b4652e51b5a...](https://github.com/capnproto/capnproto/blob/77f20b4652e51b5a7ebda414e979e059a6c7c27c/c++/src/capnp/rpc.capnp#L1050-L1093)

But in practice, the only kind of "script" we support currently is following a
chain of named fields followed by invoking a new RPC method, unconditionally,
as in:

    
    
        fooResult = cap.foo();
        quxResult = fooResult.bar.baz.qux();
    

This seems to satisfy the vast majority of real-world use cases.

I would only add other operations if I identified some use case where it turns
out to be a really big performance win. So far I haven't seen any.

------
omginternets
Does anybody know if the Go implementation is still maintained? The author is
unresponsive and there have not been any updates in quite a while.

[https://github.com/capnproto/go-capnproto2](https://github.com/capnproto/go-capnproto2)

~~~
anderspitman
There was a commit yesterday? But it does seem to have slowed down.

~~~
omginternets
Go figure, just as soon as I post my concern to HN! Oh well, here’s to hoping
development picks up again :)

Edit: unfortunately the commit seems to be to a funding.yml. While I really
can’t afford to contribute myself, I hope someone does.

------
IshKebab
Impressive work! I just wish it had a less awkward API for serialisation and
deserialisation. Compare the Rust example for Capnp with Protobuf:

[https://github.com/capnproto/capnproto-rust/blob/master/exam...](https://github.com/capnproto/capnproto-rust/blob/master/example/addressbook_send/addressbook_send.rs)

[https://docs.rs/prost/0.6.1/prost/trait.Message.html#method....](https://docs.rs/prost/0.6.1/prost/trait.Message.html#method.encode)

(Ok I couldn't actually find an example for Prost because all you do is create
a normal Rust `struct` and call `encode()` on it.)

~~~
zenhack
Yeah, this is kinda the cost of not having an encode/decode step. The Haskell
implementation (of which I am the primary author) provides a higher-level API
with "normal" data types for cases where performance requirements aren't
stringent enough to merit the extra burden on the developer. I'm mostly
interested in RPC, so I rarely use the low-level API myself...

------
anderspitman
This is awesome. The new streaming stuff is particularly interesting to me,
and it's very impressive that you managed to implement it with no protocol
changes. I have a couple questions. Please forgive any misconceptions as I've
never used capnproto myself, since all the streaming I've done has to work in
the browser, and as far as I know capnproto doesn't work over WebSocket or
WebRTC transports. But I've long been impressed with and inspired by capnproto
and sandstorm.

Main question: is there a reason you opted for traditional window flow control
a la TCP, as opposed to "request-N" style like in reactive streams[0] (see
rsocket[1] for a great implementation)?

So with request-N, a server->client stream would look something like this:

    interface MyInterface {
      streamingCall @0 (callback :Callback) -> (requester :Requester);

      interface Callback {
        sendChunk @0 (chunk :Data) -> ();
      }

      interface Requester {
        request @0 (n :UInt32) -> ();
      }
    }

And the server will only ever send as much data as has been requested by the
client via requester calls. This results in really elegant flow control that
takes into account both the network and the client's capacity to consume,
without the need to track windows. The receiver simply calls request(1) each
time it processes a message. If you want a buffer, you can just start with an
assumed N=10, 100, etc.
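As a toy illustration of the credit mechanics (hypothetical names, not the omnistreams or rsocket API):

```python
from collections import deque

class RequestNChannel:
    """Toy sketch of pull-based ("request-n") flow control: the sender
    delivers only as many chunks as the receiver has asked for."""

    def __init__(self):
        self.credit = 0          # chunks the receiver has requested
        self.pending = deque()   # chunks waiting for credit
        self.delivered = []      # chunks actually handed to the receiver

    def request(self, n):
        # Receiver side: grant the sender permission for n more chunks.
        self.credit += n
        self._drain()

    def send(self, chunk):
        # Sender side: queue the chunk; it only goes out under credit.
        self.pending.append(chunk)
        self._drain()

    def _drain(self):
        while self.credit > 0 and self.pending:
            self.credit -= 1
            self.delivered.append(self.pending.popleft())
```

A receiver would start with request(10) or so as its assumed buffer, then call request(1) after processing each chunk.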

I've found this worked really well when implementing omnistreams[2], which is
basically a very thin streaming/multiplexing layer for WebSockets, since WS
doesn't have any flow control (fibridge[3] is a good example of it in
action). I started with a window style, but once I learned about reactive
streams, the request model was much easier for me to reason about.

[0]: [https://github.com/reactive-streams/reactive-streams-jvm](https://github.com/reactive-streams/reactive-streams-jvm)

[1]: [https://github.com/rsocket/rsocket](https://github.com/rsocket/rsocket)

[2]: [https://github.com/omnistreams/omnistreams-spec](https://github.com/omnistreams/omnistreams-spec)

[3]: [http://iobio.io/2019/06/12/introducing-fibridge/](http://iobio.io/2019/06/12/introducing-fibridge/)

~~~
kentonv
Hmm, to me, what you describe still sounds window-based, it's just that the
receiver chooses the window size. The question then is: _how_ does the
receiver decide on a good size? If it chooses a window that is too small, it
won't fully utilize the available bandwidth. If it chooses one too big, it'll
create queuing delay.

This is a very hard question to answer and many academic papers have been
written on the subject. But the strategies I thought about seemed easy enough
to compute on the sender side, and the sender is the one that ultimately needs
to know the window size in order to decide when to send more data.

But I can totally imagine that there are applications where the receiver knows
better how much data it wants to request at a time. You can, of course, use a
pattern like you suggest to accomplish that, without any help from the RPC
system.

Regarding WebSockets, you could totally make Cap'n Proto RPC run over
WebSocket. It wouldn't even be much work to hook up the C++ RPC implementation
to KJ's HTTP library which supports WebSocket. The harder problem is that
there isn't currently a JavaScript implementation of capnp RPC... :/

~~~
anderspitman
> If it chooses a window that is too small, it won't fully utilize the
> available bandwidth. If it chooses one too big, it'll create queuing delay.

Yeah, that's a valid concern, and one I've run into in practice.

It's true that in environments where the server has access to TCP socket
information, traditional windowing will have an advantage for performance. You
may even be able to do some sort of detection as to how saturated the
interface is from other processes.

As I see it, the main advantage of the pull-based backpressure I described is
the simpler mental model, making it easier to reason about and implement. So
in environments with limited system information for the sender (i.e.
WebSockets, which knows basically nothing about how full the buffers are),
you don't have to pay an extra complexity cost for no benefit.

~~~
kentonv
Hmm, but if the puller doesn't actually know what value of `n` is ideal, then
what benefit is there to a pull-based model vs. having the pusher choose an
arbitrary `n`?

~~~
anderspitman
The network isn't the only resource in play. The puller is hypothetically more
aware of the size of its buffers, its processing capacity, its internet
connection speed, etc. But again, to me the primary advantage is the mental
model. For omnistreams, the implementation ended up being almost the same as
the ACK-based system I started with, but shifting the names around and
inverting the model in my head made it much easier to work with.

~~~
kentonv
Fair enough.

FWIW, Cap'n Proto's approach provides application-level backpressure as well.
The application returns from the RPC only when it's done processing the
message (or, more precisely, when it's ready for the next message). The window
is computed based on application-level replies, not on socket buffer
availability.

My experience was that in practice, most streaming apps I'd seen were doing
this already (returning when they wanted the next message), so turning that
into the basis for built-in flow control made a lot of sense. E.g. I can go
back and convert Sandstorm to use streaming without introducing any
backwards-incompatible protocol changes.
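A rough sketch of that application-level backpressure idea (hypothetical helper using Python asyncio, not the actual KJ/Cap'n Proto code): at most `window` calls are in flight, and a new message goes out only after an earlier handler has returned.

```python
import asyncio

async def stream_with_app_backpressure(chunks, handle_chunk, window=4):
    """Sketch: at most `window` calls are in flight; a new chunk is sent
    only once an earlier application-level handler has returned (i.e. the
    receiver said it's ready for the next message)."""
    in_flight = set()
    for chunk in chunks:
        if len(in_flight) >= window:
            # Wait for at least one application-level reply first.
            _, in_flight = await asyncio.wait(
                in_flight, return_when=asyncio.FIRST_COMPLETED)
        in_flight.add(asyncio.ensure_future(handle_chunk(chunk)))
    if in_flight:
        await asyncio.gather(*in_flight)
```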

~~~
anderspitman
Ah, I think I misread the announcement to mean you were using the OS buffer
_level_ information. But if I understand correctly, you're just using the
buffer _size_ as a heuristic for the window size, then doing all the logic at
the application level?

If that's the case, then implementation-wise these approaches are probably
very similar, and window/ACK is the normal way of doing this, and also the
pragmatic approach in your case.

~~~
kentonv
Yep. I probably should have gone into more detail on that, and about the
problem of slow-app-fast-connection. Oh well.

~~~
dtaht99
I would hope you'd be amused by:
[https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-...](https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-but-its-not-over-yet/)

and I am curious if you have considered an fq_codel-like approach to message
queuing? Sending a whole socket buffer kind of scares me.

------
_pmf_
I'd really like to read an experience report of using the promise pipelined /
time-travelling RPC mechanism in production.

------
xyz-x
For me, new to Cap'n Proto, this blog post doesn't cut the mustard because of
multiple red flags:

\- to start, it's self-congratulatory in stating that streaming already exists
"[via] promise pipelining" — but that has a name: it's called polling, not
streaming. Making asynchronicity explicit doesn't make a protocol streaming.

\- in the same paragraph an "object-capability model" is introduced as a
concept, but not explained

\- second paragraph: "think of this like providing a callback function in an
object-oriented language", when it should be "in a functional programming
language" (callbacks aren't OOP; they are by definition functional
programming)

\- second paragraph, vocab: what's a "temporary RPC object"? Contrary to the
precise, albeit unexplained, vocabulary in the first paragraph, this is vague.

\- creating examples with "MyInterface" as the service shows a lack of
creativity and a rather low ability to communicate; is this service on the
server side, or the client side? No one knows, and "Callback" is not a good
name for a callback; it should be "SendEmailWithData" or something that makes
sense.

\- `sendChunk @0 (chunk :Data)` doesn't make sense to a beginner without
explanation; what's `@0` and why do I care?

Here's what made me write this comment despite the threshold annoyance in
commenting:

\- Why name a _message_ `Data` when it's clearly NOT a chunk of data at
layer 3/layer 4, but rather a layer-7 artifact with retries and checksumming
implemented to ensure complete and accurate message delivery?

Finally, the article goes on to discuss flow control via a proxy variable:
your OS's TCP send buffer size. But the linked Wikipedia article states:

> because the protocol can only achieve optimum throughput if a sender sends a
> sufficiently large quantity of data before being required to stop and wait
> until a confirming message is received from the receiver, acknowledging
> successful receipt of that data

Which is not the case for Cap'n Proto (admittedly it states it uses a hack).
And there's no discussion of end-to-end problems like bufferbloat, which are
very hard to solve by only looking at your own buffer
[https://en.wikipedia.org/wiki/Bufferbloat#Solutions_and_miti...](https://en.wikipedia.org/wiki/Bufferbloat#Solutions_and_mitigations)
— or even of the semantics of "blocking on the server's return value" (is it
enough for the receiving process to have the message in memory? The type
system showcased seems to tell that story).

The article also doesn't state how a simple RPC call works. Going to
[https://capnproto.org/rpc.html](https://capnproto.org/rpc.html) immediately
puts me off by inventing "time travel", calling it "promise pipelining" and
showing an impossible trace diagram (you can't have messages go backwards in
time).

But when it's explained, it's really RPC message coalescing and compile-time
reference indirection, from the promise to the underlying object instance as
it is after executing the coalesced message pipeline. However, even when using
the example of a file system (which is about as exception-intense as you can
imagine), exceptions are ignored.

Looking through the Calculator example
([https://github.com/capnproto/capnproto/blob/master/c++/sampl...](https://github.com/capnproto/capnproto/blob/master/c++/samples/calculator-client.c++))
it turns out they haven't actually performed the compile-time indirection, but
actually block the calling thread like any random do-it-yourself RPC framework
out there.

What an RPC framework should do is give you:

\- an extremely clear serialisation model that is outside of the framework

\- a clear API

\- clear guarantees/invariants on how it manages the complexities of network
programming

In short: it must be very clear in what it promises. Cap'n Proto is not.

~~~
kentonv
Wow ok... lots of incorrect assumptions here. Just for fun let's address some
of them.

> but that has a name, it's called polling, not streaming.

It's not polling. The idea is that the callback is called multiple times to
send all the chunks, for one invocation of `streamingCall()`. Sorry if that
wasn't clear.

BTW, Promise Pipelining is only involved in the client -> server streaming
example.

> \- in the same paragraph an "object-capability model" is introduced as a
> concept, but not explained

> \- `sendChunk @0 (chunk :Data)` doesn't make sense to a beginner without
> explaination; what's `@0` and why do I care?

> \- second paragraph, vocab: what's a "temporary RPC object"? Contrary to the
> precise, albeit unexplained vocabulary, in the first pragraph, this is
> vague.

This is a news post about a new release of an existing tool. You're expected
to be familiar with the tool already, or if you are not, you can go read the
rest of the web site to learn about it.

> \- second paragraph: "think of this like providing a callback function in an
> object-oriented language", when it should be "in a functional programming
> language" (callbacks aren't OOP, they are per definition functional
> programming)

As others have pointed out, you are taking a very superficial and literalist
definition of OOP and FP. That said, I should have said "callback object",
because that's what the example actually illustrates.

> is this service on the server side, or the client side?

Cap'n Proto is a peer-to-peer protocol, not a client-server protocol. Either
side can export interfaces and either side can initiate calls.

> \- Why name a message, `Data` when it's clearly NOT a chunk of data in
> layer-3/layer-4, but rather a layer-7 artifact with retries and checksumming
> implemented to ensure complete and accurate message delivery?

`Data` is a basic data type in Cap'n Proto. It means an array of bytes. You
can use any other type here if you want, it's just an example.

> > because the protocol can only achieve optimum throughput if a sender sends
> a sufficiently large quantity of data before being required to stop and wait
> until a confirming message is received from the receiver, acknowledging
> successful receipt of that data

> Which is not the case for Cap'n'Proto

What is "not the case"? The whole point of this streaming feature is to do
exactly what's described in your Wikipedia quote.

> And there's no discussion of end-to-end problems like BufferBloat

BufferBloat is mentioned several times in the post (though I called it
"queuing latency", describing the symptom rather than the cause).

> Going to [https://capnproto.org/rpc.html](https://capnproto.org/rpc.html)
> immediately puts me off by inventing "time travel", calling it "promise
> pipelining" and showing an impossible trace diagram (you can't have messages
> go backwards in time).

"Time travel" and "infinitely faster" are obviously tongue-in-cheek claims.

> However, even when using the example of a file system (which is about as
> exception-intense as you can imagine), exceptions are ignored.

They are not ignored. Exceptions propagate to dependent calls. So if you send
a chain of pipelined calls and the first call throws, all the later calls
resolve by throwing the same exception. Eventually the caller waits for
something and discovers the exception.
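As a loose analogy (using Python asyncio futures, not the actual RPC machinery):

```python
import asyncio

async def pipelined_calls():
    # The first call in the chain throws...
    async def first_call():
        raise RuntimeError("no such file")

    # ...and each dependent call waits on the earlier result, so it
    # resolves by throwing the same exception.
    async def dependent_call(prev):
        result = await prev
        return result          # never reached

    first = asyncio.ensure_future(first_call())
    second = asyncio.ensure_future(dependent_call(first))
    try:
        await second           # the caller eventually waits...
    except RuntimeError as e:
        return str(e)          # ...and discovers the original exception
```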

> it turns out they haven't actually performed the compile-time indirection,
> but actually block the calling thread

No, the calling thread does not block.

> like any random do-it-yourself-RPC framework out there.

Haha yeah that's me, just some amateur that knows nothing about network
protocols...

~~~
xyz-x
> it turns out they haven't actually performed the compile-time indirection

Here's how much monadic control flow he actually has in Cap'n Proto:
[https://github.com/capnproto/capnproto/blob/77f20b4652e51b5a...](https://github.com/capnproto/capnproto/blob/77f20b4652e51b5a7ebda414e979e059a6c7c27c/c++/src/capnp/rpc.capnp#L1081-L1090)
— nothing except fields. Gut feeling was correct then.

[https://news.ycombinator.com/item?id=22972728](https://news.ycombinator.com/item?id=22972728)

~~~
kentonv
No, that part of the protocol defines mobile code -- code which an RPC caller
can ask the remote callee to execute directly on the remote machine. It's
intentionally limited because Cap'n Proto is not trying to be a general-
purpose code interpreter. Most RPC systems don't have this at all.

KJ Promises -- the underlying async framework that Cap'n Proto's C++
implementation is built on -- let you write arbitrary code using monadic
control flow. But that arbitrary code executes on your own machine, not the
remote machine.

~~~
xyz-x
It doesn't have to have a Turing-complete interpreter; it just has to be
provably terminating, and you can build most use cases as an active message.

What I'm after with the monadic control flow is the error cases; let's say
you have

> music.getPlaylist(ps => ps.userId ==
> "u123").findTopSongs(10).enqueue(qInstance) => Result<C, Error>

it would be nice to see how this would be interpreted into an AST and executed
as an active message on the server (receiver).

That said, I brought it up because the copy alluded to it. It's a great time
sink to build an interpreter, even if it's only acting on a unit of work whose
variant is strictly decreasing; just look at LINQ-to-SQL and IQbservable<T> a
decade ago.

> KJ Promises

Side note: another copy that greatly frustrates me, as I now try to find the
docs on the above async stuff:

> Essentially, in our quest to avoid latency, we’ve resorted to using a
> singleton-ish design, and singletons are evil [linked to a page that crashes
> for HTTPS-everywhere users (me) with PR_END_OF_FILE_ERROR].

(Besides the broken link,) singletons are not always evil. I know you know
this, I know the copy is tongue-in-cheek again and being ironic — BUT POE'S
LAW FOR CRYING OUT LOUD :D
[https://en.wikipedia.org/wiki/Poe%27s_law](https://en.wikipedia.org/wiki/Poe%27s_law)
— "Poe's law is an adage of Internet culture stating that, without a clear
indicator of the author's intent, it is impossible to create a parody of
extreme views so obviously exaggerated that it cannot be mistaken by some
readers for a sincere expression of the views being parodied"

...so, KJ Promises; I can't find them mentioned. I only find Promise
Pipelining; but that must be your Ops that allow for field traversal? The
site has this copy:

> [RPC Page] With pipelining, our 4-step example can be automatically reduced
> to a single round trip with no need to change our interface at all.

But I must be from another planet, because I really don't understand:

\- first you have a pretty decent design of files that mimic local files

\- now you instead showcase what rich messages look like, calling that a
"[message?] singleton", linking to a broken site

\- path string manipulation exists in every standard lib, it's not something
we implement

\- if someone wants to perform multiple ops on a file - let's say read a chunk
of it:

  * only Data needs to be reused for there to be no copies (contrary to the
copy), but that's also a problem in the first decoupled example

  * often, almost always, when I read about what an RPC system can do, I'm not
in the memory-management mind-set, so I don't care about re-allocating
resources

  * caching is not a relevant solution; on the contrary, it's completely
irrelevant in this context and detracts from understanding what you want me to
understand

  * caches aren't error-prone when used right, like with immutable data, or
read-through caches as transparent proxies, but all of this is beside the
point

\- then there's a discussion about "giving a Filesystem" to someone, when it's
really all in my program

  * hard-coding a path is out of scope; that's about engineering process, not
about the software. You ask "But what if they [have] hard-coded some path", I
answer "yes, so what?"

  * what if we don't trust them (our own code?) — no, it's not an AuthZ
decision locally, it's remote, so you want to be explicit about the error
cases here, but there's nothing about it — instead the copy says "now we have
to implement [authN/authZ systems]" — but again, this has nothing to do with
merging small interfaces into a larger interface; it's a problem even with the
small interfaces

\- the section ends with the broken link, and then the next section states
"Promise Pipelining solves all of this!" — but no, there's so much mentioned
above, the premise is unclear, and I have no idea what exactly promise
pipelining solves!

And then you go with the calculator example; but the file is large and I still
don't know what "Promise Pipelining" means or where to look. I see a lot of
construction of values going on and then a blocking wait (polling of the
event loop, but that's also not the point). There are so many bugs in that
copy that it's really hard to know where to start detangling it! With that
copy, I would never in my life touch the underlying code! It should be fixed!
(sorry, I'm getting into a state here, but that copy... wow)

And this is the kicker that seals the deal:

> Didn’t CORBA prove

Ehm, WTF? Why not contrast with gRPC? But also, why not clarify the above
first so I can use that understanding myself? If I'm _not_ a newbie at this,
why do you mention CORBA? Do I look like the kind of person who would ask
that question? It's demeaning to the reader.

And you mention object capabilities; that's a HUGE area of research, and I've
never worked with a live system using it. But here it's casually mentioned,
as if building such a system were a walk in the park, without explaining how.

\---

So here I am after another frustrated 40 minutes on the site, and I still
haven't found the docs on KJ Promises.

