
B-threads: programming in a way that allows for easier changes - sktrdie
https://medium.com/@lmatteis/b-threads-programming-in-a-way-that-allows-for-easier-changes-5d95b9fb6928
======
jonstaab
Just to add some contrast to the mostly negative comments here (which have
merit), this is interesting to me, not because it aims to hide the past, but
because it makes time a first-class concept in the software development model,
which most programming styles fail to do (e.g. most RDBMS frameworks add
migrations on as an afterthought). I like this, and hope something like it
catches on.

The problems with this approach seem solvable to me, albeit with more
experimental magic that could explode:

- The resulting big ball of mud (and subsequent performance problems of a
long pipeline of relations) could be compiled away, resulting in a single
artifact. That is, you'd develop in append-only style, but when you "commit"
your release, you'd end up with a single, optimized artifact for deployment,
which would also be readable. This seems really nice to me, since your
changes, while being based on the behavior of the system, would create a diff
in the implementation of the system, potentially reaching way back into
upstream events (in the example, the behaviors that block hot water and
substitute cold water would just completely eliminate the first pane and
simplify to adding cold water). This would let you see your changes from
multiple perspectives. This approach also seems really friendly to fuzz-
testing, which would give you a third look into the behavior of the system,
and you could write tests based on the final state of the system after a
number of given events.

- Migrating data structures actually seems easier to me for an event-sourced
approach, since you'd just re-project your domain models based on the new flow
of events. b-threads would allow you to re-compile your event stream just like
you re-compiled the source artifact. (Having parameterized events remains a
problem, since your historical data could end up being incomplete and invalid
under the new policy; you'd have to adopt a permissive schema to keep stuff
that validation would otherwise reject.)
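To make the re-projection point concrete, here's a toy sketch (the event shapes and the bank-account domain are invented for illustration): changing the domain model is just re-running a new fold over the same old events.

```python
# Toy event-sourcing re-projection: one event log, folded through two
# versions of a projection. No migration script is needed; the new
# model is computed by replaying history.
events = [
    {"type": "deposited", "amount": 100},
    {"type": "withdrew", "amount": 30},
    {"type": "deposited", "amount": 5},
]

def project_v1(log):
    # Original domain model: just a balance.
    balance = 0
    for e in log:
        balance += e["amount"] if e["type"] == "deposited" else -e["amount"]
    return {"balance": balance}

def project_v2(log):
    # New policy: also track withdrawal count. Same events, new fold.
    state = {"balance": 0, "withdrawals": 0}
    for e in log:
        if e["type"] == "deposited":
            state["balance"] += e["amount"]
        else:
            state["balance"] -= e["amount"]
            state["withdrawals"] += 1
    return state

print(project_v1(events))  # {'balance': 75}
print(project_v2(events))  # {'balance': 75, 'withdrawals': 1}
```

The incomplete-history problem shows up here too: if v2 had required a field old events never recorded, the replay would need a permissive schema or a default.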

I'll agree that b-threads don't really solve anything, but they do bring up
some interesting questions that I think are worth asking. Datomic and
Darklang, I think, are much more practical, and seem to dabble in the same
sort of areas.

~~~
adamc
Even if you can compile away the performance problems, how do you escape the
analytical mess that is left behind? Patching software the old-fashioned way is
expensive because we have to integrate the changes into the model --
essentially, rethink parts of the software. But the result is certainly likely
to be easier to understand down the line. In a layered, "historical" model, I
now have to understand _the entire history_, and correctly deduce how the
history has changed the functioning of the original model. That strikes me as
horrible.

~~~
btown
Is this not a tooling problem? There are two compiler targets: the runtime,
and the engineer. A system like this is sustainable iff it can compile a
subset of the runtime-necessary information into a readable, interactive,
sequential form.

And this very much exists in the real world: every moddable video game, every
audio/video tool accepting plugins, every multi-team business workflow, every
browser plugin architecture, every SQL trigger... all are collections of
independently developed state machines, intercepting a global event list and
interrupting with their own events, just waiting to conflict with each other.
Plugins and (blocking/yielding) extension points are the only real way to
build software with massive feature surface areas.

The tooling to visualize and debug these flows is IMO quite lacking, and I
don’t think this is helped by systems engineers’ general love for all things
textual. One needs to not only understand the flows they think will happen,
but also fuzz the ones they don’t, and all this needs to be presented in a way
that doesn’t overwhelm human working memory. I don’t think that’s a solved
problem by any stretch. And I doubt the solution will look like our current
text editors. But it’s something I think about quite often.

------
daenz
I'm going to make a conscious effort not to come off sounding like an asshole,
but please excuse me if I slip up. I find many of the ideas in this post to be
fundamentally at odds with the direction that "good software development"
should be travelling. The core of my feeling is best captured by the following
quote from the post:

>As a system grows in complexity we don’t necessarily care about how old
b-threads have been written, hence we don’t care about maintaining them.

This post is essentially formalizing the process of creating a Big Ball of
Mud[0] that is so complex and convoluted that it is impossible to understand.
The motivation for formalizing this process seems sane and with good
intentions: to add functionality quickly to code you don't really understand.
Normally, doing something like this is considered cutting corners and incurring
explicit technical debt, and must be used sparingly and responsibly. However,
the process of "append-only development" is embracing the corner cutting and
technical debt as a legitimate development process. I can't get on board with
this.

To be more specific, with an example (and maybe I am wrong in understanding
the post, this would be the time to point that out to me), let's suppose you
have a massive complex software system that was built over the years with this
"append-only" style of development. One day you find a nasty bug in one of the
lower layers, and to correct it, you have to change some functionality, which
moves/removes some events that subsequent layers are depending on for their
own functionality. Suddenly, you are faced with rewriting all of those layers
in order to adapt to your bugfix. What you're left with is a nightmare of
changes that disturb many layers of functionality, because they're all based
on this append-only diffing concept: the next layer is dependent on the
functionality of the previous layer.

This is what programming APIs are for: to change functionality in lower layers
with minimal influence on subsequent layers. This post and process seem to be
imagining a world without APIs.

0. [http://www.laputan.org/mud/](http://www.laputan.org/mud/)

~~~
plutonorm
"Good software development"... in 15 years of programming at 7 different
companies I've never seen a good manageable piece of software. Our current
paradigms do not work. I for one welcome anything that offers an improvement
on the current clusterfuck.

~~~
tluyben2
Many more years, many more companies; there are good examples, but they are
not 'in companies' (I am thinking Redis, SQLite, etc). Especially 'fortune
1000' companies (local or global) have the most terrible software imaginable
in my experience. Yet it works and, well, they belong to the fortune 1000, so
apparently it is not _that_ bad. But it is very badly written software, and I
agree we must explore better ways of writing software. I just do not think
this is one of them. It cannot hurt to explore, though.

~~~
jdmichal
This is really due to software engineering being treated as a cost center.
When viewed this way, the business always attempts to drive cost to
rock-bottom, which then means software that only incurs technical debt,
because the business will _never_ pay to reduce the debt, only to get new
features. And yes, this is a totally false dichotomy, because eventually that
debt means that all those future features are more costly. But something about
boiling frogs...

The only alternative I've seen without completely rethinking company
structure, is having engineering management rebuking business and pushing for
these initiatives. Which can be inadvisable from a career perspective, so
usually does not happen. It's more politically savvy to push for a "new
project" that will fix all the issues of the existing systems.

------
msteffen
In a prior team, we accomplished the goal of "a newcomer should be able to
figure out the history of the code they're looking at, and thereby understand
why it is this way" in a simpler, lower-tech way:

1. One commit per PR (so the commit history wasn't polluted with rough drafts
and debug logs and such)

2. Every PR links to a bug

3. Design discussions happen in bugs

Then when you see some weird code, you can "blame" back to the commit, and
from there look at the bug to see why the commit was needed. We also really
encouraged people to separate refactoring from new features (refactor first,
in a descriptive PR, then add the feature), so that you avoided the problem of
seeing a complex refactoring in a PR called "add foo".

I agree with other commenters that encoding the design history of your project
in its operation doesn't seem helpful, and in my opinion that's because it's
solving the wrong problem. Finding the places in a complicated project that
need to change is only hard because knowing, to a fairly complete degree, how
the system already works is hard, and knowing how a system already works is
mostly the product of understanding the problem that it solves deeply.
Experienced engineers on a project usually have no problem figuring out which
parts of a system need to be changed, because they already know the system
well, and they don't know the system well by virtue of having all the code
memorized, which would be impossible, but by having participated in all of the
design discussions.

Therefore, the problem that needs to be solved is making the learning process
as fast and easy as possible—thus the strict process around documenting
changes and their motivation. Allowing people to quickly discover the existing
code's motivation gives them the information that will actually need to be
committed to memory.

~~~
0xdeadbeefbabe
Is all your code in one repo? Just curious, because I see this learning
problem made worse by multiple repos with multiple PRs.

~~~
msteffen
It was all in one repo on that team, yes.

Offhand, it seems like linking all PRs in all repos back to the bug, and
linking the bug back to all of the disparate PRs, might help. A version of the
"many PRs" problem that we sort of had was that it often took many PRs to
close a bug. Our goal wasn't to minimize the number of PRs; in fact we
encouraged small PRs. Rather, the goal was to get engineers past the commit
and PR to the underlying motivation and design.

------
yowlingcat
Nothing about this approach seems easier to reason about or maintain over the
long term. Transactional integrity, migrating data structures, and modifying
business logic/control flow are key building blocks required for many
production applications, certainly line of business ones.

I'm reminded of the golden hammer fascination with event sourcing, and I
continue to see event sourcing applied inappropriately. Is it useful for some
specialized applications? Yes, and it's a great tool there! But should it be
the default tool you reach for to solve most problems over a relational DB and
your most trusted, high level, stable programming language with a mature
library ecosystem?

No to that question, and no to this.

------
akkartik
Interestingly, my article from a few years ago has a very similar motivation
and initial figure. And it too ends up at append-only programming. But it does
so in a _very_ different way:
[http://akkartik.name/post/wart-layers](http://akkartik.name/post/wart-layers)

(I still program all my side projects in this way.)

------
kadendogthing
Anyone who's had to analyze complex systems at a high level will recognize
that this is a pattern that often emerges in some form. The overall pattern
happens in game engines, plugin systems, UI frameworks, in service buses, all
over the place. It's just now at the code level. It's a good observation,
though it seems the author has arrived at the correct conclusion/analysis and
tried to shoehorn it into fundamentally lacking systems (i.e., other
languages/stacks that are inherently not functional, have side effects, and do
not work off "events"/sync points).

My thoughts on this are: append-only programming is less effectual than having
an overall append-only system, where many programs take input (including the
original source "message"), then hand off the output to the next program in
line. Which programs get run depends entirely on that run context's
configuration. When you have disparate programs working towards a final goal,
and these programs are defined in a readable configuration, it gives you quite
a few benefits: good clean logging structures, easy to reason about, easy to
change (feature toggling, data migration points, etc), and easy to clean up
when required.

I call it unix'ing your systems.
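A minimal sketch of that configuration-driven pipeline idea (the stage names, config shape, and message format are all invented for illustration):

```python
# Each stage is an independent "program": it takes the original message
# plus the previous stage's output, and returns new output.
def enrich(message, data):
    return {**data, "user": message["user_id"]}

def bill(message, data):
    return {**data, "charged": data["user"] is not None}

# Which stages run is decided by run-context configuration, not code.
# Feature toggles and migration points become config edits.
PIPELINE_CONFIG = {"checkout": [enrich, bill]}

def run(context, message):
    data = {}
    for stage in PIPELINE_CONFIG[context]:
        data = stage(message, data)  # hand output to the next program
    return data

print(run("checkout", {"user_id": 42}))  # {'user': 42, 'charged': True}
```

The point is that the composition lives in `PIPELINE_CONFIG`, so removing or toggling a stage is an append/edit to configuration rather than to any program.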

There are a lot of comments calling this silly, but please attempt to give it
a few read-overs and apply it to systems you've worked with. Hopefully they've
been large enough to draw similarities to what he's talking about in a very
clean, academic manner.

------
asperous
Video game engines often have plugin systems like this that listen for hooks
or events and can act on them. It does make extremely complex things a bit
simpler.

But the chart in graph 1 still applies: often, when adding new functionality,
you realize you need to add or remove modularity and extensions.

~~~
Chris2048
Eclipse has lots of plugins, but that turned out not so great, as too many
plugins are known to cause issues.

------
reason-mr
Author _really_ needs to read up on the actor model:
[https://en.wikipedia.org/wiki/Actor_model](https://en.wikipedia.org/wiki/Actor_model)

~~~
Chris2048
Could you expand on your meaning, and relate b-threads to Actors, instead of
dumping a massive article? A TL;DR of why time is well spent reading it.

------
dan-robertson
I think the issue with the argument of the article (just look at the event
stream and write programs to tweak it into the stream you want) is threefold:

1. You can’t see the blocking in the event stream behaviour so your change
might mysteriously fail to work. I wasn’t super clear on what the semantics
were but I think the last change to not show ads to enterprise customers
accidentally breaks the normal case and causes the program to lock up once it
hits isValid. Normal program changes also suffer from accidentally breaking
existing behaviour. Especially eg a change to a superclass breaking subclasses
or accidentally mutating some global state or throwing an exception in the
wrong place.

2. There are lots of possible event streams and it is hard to predict how
well your change will behave under all of them. This is also a problem for any
sufficiently large and modified program.

3. You can’t block a block, so it is hard to undo modified behaviour in some
cases (this is what causes the ATM to lock up after the last change, if I’m
right in thinking it does).
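For reference, the request/wait/block semantics under discussion can be sketched with generators. This is my own toy reading of the model, not the article's implementation; the hot/cold names are illustrative. An appended `interleave` b-thread changes behavior purely by blocking, and when nothing requested is unblocked the run simply stops, which is the lock-up failure mode in point 3.

```python
# Toy b-thread scheduler: each b-thread is a generator yielding sync
# statements of the form {"request": {...}, "wait": {...}, "block": {...}}.

def add_hot():
    for _ in range(3):
        yield {"request": {"hot"}}

def add_cold():
    for _ in range(3):
        yield {"request": {"cold"}}

def interleave():
    # Appended later: forces alternation purely by blocking.
    while True:
        yield {"wait": {"hot"}, "block": {"cold"}}
        yield {"wait": {"cold"}, "block": {"hot"}}

def run(*bthread_fns):
    threads = []
    for fn in bthread_fns:
        g = fn()
        threads.append((g, next(g)))  # advance to first sync point
    log = []
    while threads:
        # An event is selectable if some thread requests it and no
        # thread blocks it.
        blocked = set()
        for _, stmt in threads:
            blocked |= stmt.get("block", set())
        selectable = sorted(
            e for _, stmt in threads
            for e in stmt.get("request", set()) if e not in blocked
        )
        if not selectable:
            break  # nothing can fire: finished, or locked up by blocking
        event = selectable[0]  # deterministic (alphabetical) tie-break
        log.append(event)
        survivors = []
        for g, stmt in threads:
            # Resume every thread that requested or waited for the event.
            if event in stmt.get("request", set()) | stmt.get("wait", set()):
                try:
                    stmt = next(g)
                except StopIteration:
                    continue  # this b-thread is done
            survivors.append((g, stmt))
        threads = survivors
    return log

print(run(add_hot, add_cold))              # no blocking: tie-break order
print(run(add_hot, add_cold, interleave))  # alternation via blocking
```

Note that the blocking is invisible in the resulting event log, which is exactly the debugging problem point 1 raises.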

However, I don’t think this means that this is a pointless area of research.
It may be that good ways of dealing with these things are eventually devised.
And the idea of lots of small processes doing simple things, coming together
to make something which holistically behaves in a smart, reliable, and
resilient way, seems popular and reasonable (e.g. see Copycat).

One good thing about this method would be testability. A test can be just
another bthread (or more) running in the system followed by printing a trace
of the events. That way one might easily know if some existing behaviour is
broken because the printed trace would change.

Another similar idea (without necessarily requiring append-only programming)
is one from Eve, and a talk whose name I don’t remember, where one writes
Prolog-like rules to manipulate a set of known facts over time. This also
suffers from difficulties with negation.

I suppose a general idea is that with these “bag of interacting rules” systems
(which seem a good start for building complex systems which can be modified
and tweaked reliably and resiliently) it is hard to have negatives (thus
blocking) and in particular very hard to have reliable double negatives
(blocking a block).

So what is the solution? Well I don’t know and it seems that any answer could
be unsatisfactory. One possibility could be to allow for more probabilistic or
otherwise weighted behaviour but this seems bad for reliability of results.
Another idea is to disallow negation but this seems to make everything hard.
Another idea might be to somehow make synchronisation points better.

~~~
btown
I think one key is that in the real world (where these things are
plugins/actors) there is a distinction between functional and actual
immutability. If you need to block a block or undo a change, write a PR to the
relatively-frozen codebase of the relevant b-thread/actor that makes its
behavior conditional on a never-before-seen event but defaults to the old
behavior. That way it verifiably won't introduce behavior changes on its own,
but it makes itself amenable to new extension points.

One could even formalize this system: a piece of code can be replaced if its
behavior under its current set of events is guaranteed not to change.
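That replacement rule could be sketched like this (the `with_extension_point` name and the trigger event are hypothetical):

```python
# Wrap an existing handler so behavior only changes in the presence of
# a brand-new event type, one that no historical stream contains. Under
# all previously-seen event sets, the wrapped handler is verifiably
# identical to the old one.
def with_extension_point(old_handler, new_handler, trigger="v2.enabled"):
    def handler(event, seen_events):
        if trigger in seen_events:   # never occurs in old streams
            return new_handler(event)
        return old_handler(event)    # unchanged default behavior
    return handler

old = lambda e: f"old:{e}"
new = lambda e: f"new:{e}"
h = with_extension_point(old, new)
print(h("pay", set()))            # old behavior: no trigger seen
print(h("pay", {"v2.enabled"}))   # new behavior, opted in by event
```

This makes the replacement criterion checkable: the swap is safe precisely because the old and wrapped handlers agree on every event set that lacks the trigger.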

Also, weighted/ordered behavior is sometimes necessary due to race conditions:
two identical threads respond to events in different ways, which one wins?
When I have made systems with plugin-like architecture, I always establish a
global “pecking order” of sources for consistency, but anything that relies on
the pecking order is a massive code smell. (That’s a rule I wish I could tell
my younger self to follow more often!)

I’ve spoken in another comment about testability:
[https://news.ycombinator.com/item?id=20562272](https://news.ycombinator.com/item?id=20562272)

I love these kinds of magical discussions :)

------
hzhou321
It starts with an interesting description of the problem, but I felt that it
took a sharp turn and reached a bizarre conclusion.

~~~
hinkley
We are still wrestling mightily with the notion that solving a problem badly
can be worse than not solving it at all.

------
crimsonalucard
The solution to incremental changes is to develop your app in a way where
every unit of computation is compose-able under a strict rule. Meaning I can
write a universal function compose(A,B) = C and that function can compose any
primitive. Then you build complicated logic just like how you build a wall
from a set of bricks... form complicated logic as a composition of more
primitive logic all the way down.
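A sketch of the universal compose rule being described, if one reads the unnamed primitive as a plain one-argument function (my reading, not something the comment states):

```python
# One universal compose: works on any two units of this kind,
# with no per-pair glue code.
def compose(a, b):
    return lambda x: b(a(x))

# Bricks:
double = lambda x: x * 2
increment = lambda x: x + 1

# Wall: built from bricks by the single composition rule.
double_then_increment = compose(double, increment)
print(double_then_increment(5))  # 11
```

The "wall from bricks" property is that `compose` itself never changes as the bricks do; only the bricks and the order vary.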

If your program is constructed this way at every layer of logic, not only will
your program be amenable to append-only styles of programming, but it will be
amenable to decomposition. Often you find that your primitives are too big and
you need to split them... well, if your primitives are themselves made out of
the compositions of lower-level primitives, then breaking apart that function
is trivial.

There is only one primitive in all of programming that follows this
composition rule. Procedures cannot compose, Objects cannot compose. What's
left?

~~~
yellowapple
> if your primitives are itself made out of the compositions of lower level
> primitives

I'd argue those ain't "primitives" in that case; what differentiates a
"primitive" from some other kind of computation is that it can't be broken
down further.

> Procedures cannot compose

Depends on how you define "procedure". In the sense used in procedural
programming, you certainly _can_ compose them (i.e. by jumping into a
subroutine that in turn jumps into other subroutines, pushing and popping
things to/from the stack in the process). This is a pretty fundamental concept
for threaded interpreters, and Forth in particular (as well as other stack-
driven concatenative languages descended from it) exemplifies this as the very
basis of the language itself.

> Objects cannot compose

Object composition (both by including objects within other objects and by
using interfaces and implementations thereof) has been a thing for multiple
decades now.

~~~
crimsonalucard
> I'd argue those ain't "primitives" in that case; what differentiates a
> "primitive" from some other kind of computation is that it can't be broken
> down further.

Usually when you program you don't create a library made out of the
lowest-level primitives. You start out with higher-level primitives, and
hopefully, if your design is correct, all upper layers are different
compositions of this lowest-level set of primitives.

I am talking about this layer of primitives. If you chose your primitives
incorrectly and find out that you need to break them apart, it is far easier
to do this if your primitives themselves were also made out of compositions of
lower-level primitives. If your primitives were already a tangle of objects
and procedures, this would be very hard.

>Depends on how you define "procedure". In the sense used in procedural
programming, you certainly can compose them (i.e. by jumping into a subroutine
that in turn jumps into other subroutines, pushing and popping things to/from
the stack in the process). This is a pretty fundamental concept for threaded
interpreters, and Forth in particular (as well as other stack-driven
concatenative languages descended from it) exemplifies this as the very basis
of the language itself.

I defined what I mean by composition. By procedure I mean a list of procedures
of instructions. What does it mean to compose steps One through five with
steps six through ten? Is steps six through ten compose-able with another set
of steps from a whole different sub routine? Most likely no. You cannot define
a singular function compose(step1, step6) = step4 like I described in my
definition.

>Object composition (both by including objects within other objects and by
using interfaces and implementations thereof) has been a thing for multiple
decades now.

Does this fit with what I defined as composition? No it does not.

Object composition is a horrible term. It is not true composition; dependency
injection is a more fitting term for what is actually happening. You are
making one object depend on another object; you are not composing two objects
to form a new object. When objects "compose", you create dependencies and
custom glue code to make everything fit together.

When I refer to composition, I am talking about how bricks compose to form a
wall. There is only a singular form of composition to compose bricks into a
wall, and each brick can exist without a dependency on another brick.

"Object composition" is using as much concrete as possible to compose bricks
of every geometric shape into a mishmash ball of solids.

Again objects and procedures do not compose like bricks. Another programming
primitive is more suited for this. Maybe someone knows what this primitive is.

~~~
yellowapple
> You start out with higher level primitives

Those aren't _primitives_ , though. Primitives are the lowest possible level;
that's what makes them primitives. If something can be decomposed further (or
is in turn composed of other things which are visible to the language without
e.g. dropping down to assembly), then it is definitionally _not a primitive_.

I think the word you seek is "components".

> By procedure I mean a list of procedures of instructions.

That's a circular definition, and does not in any way clarify what you mean by
"procedures".

> What does it mean to compose steps One through five with steps six through
> ten?

In the context of procedural programming, it would ordinarily mean to perform
those steps in sequence (that is: the output of steps one through five would
be the input to steps six through ten), or to define a procedure which in turn
calls those procedures in sequence.

> You cannot define a singular function compose(step1, step6) = step4 like I
> described in my definition.

In pseudo-Forth (with punctuation turned into something more reasonable, and a
couple imaginary procedures for creating (anonProcedure) and executing
(execute) function pointers):

    
    
        procedure stepOne anonProcedure finely chop end end
        procedure stepSix anonProcedure pan add stir end end
        
        \ [pop] is imaginary and purely illustrative of the idea of
        \ taking function pointers off the stack.
        procedure compose anonProcedure [pop] [pop] end end
        procedure stepFour stepSix stepOne compose end
        chicken stepFour execute  \ equivalent to: chicken finely chop pan add stir
    

Looks pretty composed to me.

> you are not composing two objects to form a new object
    
    
        class Foo {
          greeting = "Howdy"
          hi = { puts $greeting }
        }
        
        class Bar {
          farewell = "Adios"
          bye = { puts $farewell }
        }
        
        class Baz {
          greeter = new Foo
          dismisser = new Bar
          hi = greeter.hi
          bye = dismisser.bye
          speak = { hi; bye }
        }
        
        speaker = new Baz
        speaker.hi   # "Howdy\n"
        speaker.bye  # "Adios\n"
        speaker.speak # "Howdy\nAdios\n"
    

Tada!

> When I refer to composition I am talking about how bricks compose to form a
> wall.

You're talking about doing it in a way that assumes you're only allowed to use
bricks on their own, piled high with nothing actually connecting them together.

Procedures and objects work great as bricks. You just need some mortar and
rebar.

> Maybe someone knows what this primitive is.

Maybe you'd do a better job conveying your point if you just said the
"primitive" you mean instead of being needlessly coy about it.

And regarding this brick-which-cannot-be-named-for-some-reason: yes, they
happen to snap together remarkably well like Lego bricks, but people don't
(typically) build their houses with Lego bricks, nor do people normally write
their business logic in Haskell.

~~~
crimsonalucard
>Tada!

Looks like you defined a new object, then placed two objects into the new
object as dependencies. Sure, you can call it composition, but that's not the
composition that I'm talking about. I specifically defined what I'm talking
about in the first post so that it could be referred to, rather than everyone
needlessly going in circles talking about different definitions of
composition. Here, let me copy and paste it for you as an example:

"The solution to incremental changes is to develop your app in a way where
every unit of computation is compose-able under a strict rule. Meaning I can
write a universal function compose(A,B) = C and that function can compose any
primitive. Then you build complicated logic just like how you build a wall
from a set of bricks... form complicated logic as a composition of more
primitive logic all the way down."

Tada!

> Those aren't primitives, though. Primitives are the lowest possible level;
> that's what makes them primitives. If something can be decomposed further
> (or is in turn composed of other things which are visible to the language
> without e.g. dropping down to assembly), then it is definitionally not a
> primitive.

The lowest-level primitive, you would think, is a bit. But we hardly go to
that level, do we? We actually have higher-level things that we refer to as
primitives (for example, ints). You know what's even lower than a bit? An
electrical voltage... I can go to even lower levels of abstraction as well:
atoms. Yet here we are, still calling a bit a primitive even though it is not
the lowest level.

When you program, you design systems. And in those systems you define
primitives that function in YOUR universe. Sure, you design the system on top
of another system which itself has lower-level primitives, but we don't have
to refer to those outside primitives within our own universe. So what I mean
is primitives not in the sense of the system I am working in, but the system I
am creating.

That being said, PRIMITIVE is 100% the appropriate word. You never defined
what you mean by "component", but I'm sure it's some super-specific thing that
comes from some design pattern; that's beside the point, though.

>That's a circular definition, and does not in any way clarify what you mean
by "procedures".

You're referring to a typo. Replace "of" with "or"; the point was to
illustrate a potential refactoring. Ex: a list of procedures is the same as a
list of instructions.

You get what I mean by procedures, so no need to act confused about my typo or
a slightly confusingly worded sentence.

>In pseudo-Forth (with punctuation turned into something more reasonable, and
a couple imaginary procedures for creating (anonProcedure) and executing
(execute) function pointers):

We can get really technical here. Due to the Curry-Howard isomorphism, a
Turing machine can technically do anything and simulate composition as well.
But like your example shows, it is awkward. It's like adding a char to an int
in C++: yeah, you can do it.

Additionally you are arbitrarily naming your procedures. Step 6 should be
something that is very distant in meaning from step 1. For example:

Step 1 should be go to the store. Step 2 should be buy chicken. Steps 3-5
should be all the steps to prepare the chicken. Step 6 is eat chicken.

Compose step 1 with step 6. It does not make semantic sense.

Go to store, eat chicken.

What chicken?

Sure, you can "compose", but like I said, you've basically given an example
where you can do anything. You need rules on what can compose with what.
Usually these rules are implemented by something called a type system. Type
systems are typically ineffective at enforcing composition of procedures.
There would really be no way for a type system in your example to make sure
step 1 can only compose with step 2 to form stepOneAndTwo, and such a
composition is not universally defined. You define how your steps compose in
every program you write. No Legos, no modularity.

>Maybe you'd do a better job conveying your point if you just said the
"primitive" you mean instead of being needlessly coy about it.

I'm not being coy. What's going on here is an inability to understand. Or
maybe you do understand. In that case I don't know what's going on.

>And regarding this brick-which-cannot-be-named-for-some-reason: yes, they
happen to snap together remarkably well like Lego bricks, but people don't
(typically) build their houses with Lego bricks, nor do people normally write
their business logic in Haskell.

What does haskell have to do with this? I'm not talking about haskell. I am
talking about something 100% of all programmers already use all the time.

I'm not naming the brick because if you know what I'm talking about, then
there's really no point in arguing about it is there? You get it, because the
logic led inescapably to one place. You arrived at the same conclusion and
therefore we are in agreement.

If you don't get it, well, that's not my problem. I'm not here to dive into a
rabbit hole about the merits of a certain topic, only the flaws of the current
paradigm. You want to talk about it? Be my guest, but I'm not starting it
myself.

Who writes business logic in Forth nowadays? Fewer people than Haskell, that's
100% fact, but that's beside the point because, again, I was never talking
about Haskell.

~~~
yellowapple
> But that's not the composition that I'm talking about.

Okay, then:

    
    
        composeObjects(first, second) = {
          third = new Object
          third._methods = first._methods.merge(second._methods)
          return third
        }
        
        class Foo { hi = { puts "howdy" } }
        class Bar { bye = { puts "adios" } }
        baz = composeObjects(new Foo, new Bar)
        baz.hi  # prints "howdy"
        baz.bye # prints "adios"
    

Tada!

This is only slightly simplified from languages like, say, Ruby or Common
Lisp, both of which (last I checked) support this sort of class-oriented
metaprogramming (or Perl, which exposes the ability to "bless" arbitrary data
structures as objects).
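
For what it's worth, that pseudocode maps almost directly onto Python's three-argument `type()` built-in. A minimal runnable sketch (the names here are mine, not from the thread):

```python
# Build a new class whose method table merges two others' methods,
# mirroring the composeObjects pseudocode above.
def compose_objects(first, second):
    # skip dunder entries; the later class wins on name collisions
    merged = {k: v for cls in (first, second)
              for k, v in vars(cls).items() if not k.startswith("__")}
    return type("Composed", (), merged)

class Foo:
    def hi(self):
        return "howdy"

class Bar:
    def bye(self):
        return "adios"

baz = compose_objects(Foo, Bar)()
print(baz.hi())   # howdy
print(baz.bye())  # adios
```

Ruby mixins and Common Lisp's multiple superclasses get you roughly the same merge with language-level support.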

> But like your example shows, it is awkward.

In what way was that especially awkward? You take two pointers to already-
compiled procedures, you jump to / call those addresses one after the other,
and Tada! you've got literally all that's required to build one procedure
dynamically from two different ones.

> Go to store, eat chicken.

> What chicken?

That depends on your domain. In this particular case, it could be the first
chicken on the stack. It won't be a particularly tasty chicken, since it'll be
entirely uncooked and probably frozen, but that's certainly within the realm
of possibility.

> you've basically gave an example where you can do anything

Well yeah, welcome to Turing-completeness :)

> You need rules on what can compose with what.

Indeed, and the rules in that example would be:

1\. There are two things on the stack, and those things are addresses of
procedures to JMP or CALL into.

That's really all there is to it. If the arguments aren't valid pointers to
procedures, then you crash :)

(There are of course Forth derivatives/dialects/descendants that do type
checking, in which case you could verify that the two topmost items on the
stack are pointers to procedures, but that's not strictly necessary for
_composition_ ; just for making sure your composition procedure doesn't try to
compose things that aren't procedures)

> Who writes business logic in Forth nowadays?

I picked Forth (or more specifically: an imaginary derivative thereof) for the
procedure-composition example because it's simple to a fault, and because it's
able to compile procedures at runtime. You pop things from the stack, you push
things onto the stack. You pop two pointers, you store them in the necessary
code to CALL or JMP to those pointers, and you push a pointer to that
generated code. Tada! Composed.
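
The same pop-two-push-one move can be sketched outside Forth; here's a rough Python analogue in which the "stack" holds callables (an illustration of the idea, not real Forth semantics):

```python
# The "stack" holds procedure references; compose pops two and pushes
# a new procedure that runs them one after the other.
stack = []

def compose():
    second = stack.pop()
    first = stack.pop()
    def composed():
        first()
        second()
    stack.append(composed)

log = []
stack.append(lambda: log.append("go to store"))
stack.append(lambda: log.append("eat chicken"))
compose()
stack.pop()()  # run the composed procedure
print(log)     # ['go to store', 'eat chicken']
```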

You could do the same thing in Lisp, or in C/C++ (when linked to, say, LLVM,
or when using a compiler that allows dropping down to assembly), or assembly,
or in quite literally any other Turing-complete language with the ability to
emit arbitrary code and JMP to it (and even then).

Again: not all bricks are Legos. Nothing wrong with needing mortar and rebar
to build something.

~~~
crimsonalucard
>Tada!

composeObjects cannot compose correctly. The nature of objects is to know the
domain. Food is a composition of ingredients and cooking, yet you can eat food
and take it to go, because during composition the context of what food is gets
added. Your composition method cannot do this; it does not know about context.
A tuple (cooking, ingredients) has no domain knowledge. Your composition is
mashing random stuff together. It is not correct. It is the same as creating a
tuple or list of two objects, with no concept of Food.

>That depends on your domain. In this particular case, it could be the first
chicken on the stack. It won't be a particularly tasty chicken, since it'll be
entirely uncooked and probably frozen, but that's certainly within the realm
of possibility.

No it's not. Domain matters, otherwise programs are meaningless. Composition
without domain proves nothing.

>Well yeah, welcome to Turing-completeness :)

Why are you welcoming me to something you don't understand? Everyone knows
what Turing completeness is, therefore everyone knows that you can imitate
features of one primitive with features of another if the underlying system is
Turing complete. So what is the point of your whole exposé? Let me answer for
you: there is no point. When someone says that C++ supports types and Python
doesn't, we have a guy here who says "well, Python does support types: look, I
wrote a whole C++ compiler with type checking in Python..." You are that guy.
Technically correct, but literally going nowhere with all your points.

>Indeed, and the rules in that example would be:

>1\. There are two things on the stack, and those things are addresses of
procedures to JMP or CALL into.

>That's really all there is to it. If the arguments aren't valid pointers to
procedures, then you crash :)

No dude. By rules I mean domain-specific rules, not arbitrary rules of the
language. The composition of step1 and step6 is illegal because it doesn't
make sense, even though you can do it in Forth.

>You could do the same thing in Lisp, or in C/C++ (when linked to, say, LLVM,
or when using a compiler that allows dropping down to assembly), or assembly,
or in quite literally any other Turing-complete language with the ability to
emit arbitrary code and JMP to it (and even then).

Again: you can do anything with any language, therefore all languages are
isomorphic and there's no difference among any of them. Dude. Come on.
Objects and procedures do not compose unless you bend them to imitate
functions. You can do this in C++, Lisp, anything, including Forth. This is an
obvious fact. Therefore nobody brings it up. Except you.

>Again: not all bricks are Legos. Nothing wrong with needing mortar and rebar
to build something.

Buildings aren't programs built to be refactored and changed on a very dynamic
basis. If you want your program to be scalable (which buildings aren't),
modular (which buildings aren't), and refactorable (which buildings aren't),
then you need to build with Legos. Mortar limits scalability, which is a
feature 90% of programs want but fail to have.

If you want your program to be glued together like a giant monolith, be my
guest, use the mortar.

~~~
yellowapple
Context is not a requirement of the actual act of composition, though; that's
what I'm trying to help you understand. Context might be _why_ you might
compose two things in a specific way, or _why_ you might restrict composition
between given things, but the act of composition itself is not tied to those
specific requirements. In the functional realm: passing a function and list
into a map function should work for any function and any list; map needn't
care whether or not that function actually is applicable to that list (if it
ain't, then that's the function's problem, not map's). Same deal for the
"composeObjects" function (if Foo and Bar shouldn't be composed, then don't
try to compose Foo and Bar), and same deal for the compose procedure (if step1
and step6 shouldn't be composed, then don't try to compose step1 and step6).

> The composition of step1 and step6 is illegal because it doesn't make sense

It doesn't make sense _to you_. The computer doesn't care whether or not it
makes sense to you. The computer only cares that it's able to pop two
subroutine pointers and push one to a new subroutine that JMPs/CALLs to those
popped pointers. The computer's gonna try to go to the store and eat a
chicken, because that's what it's been told to do. To the computer, that makes
_perfect sense_ , because everything's subroutines and words, the chicken and
the store and the going and the eating just being data to be pushed and
popped.

You're welcome to write subroutines that actually validate your ideas of
sensibility before proceeding to actually compose the subroutines they're
popping off the stack, and there are indeed languages that help you with that,
but the computer is still fully capable of composing things regardless of
whether or not you think they should be composed.

\----

To humor your argument, though:

    
    
        composeFooAndBar(first: Class<Foo>, second: Class<Bar>) {
          third = new Class
          third._methods = first._methods.merge(second._methods)
          return third
        }
        
        FooBar = composeFooAndBar(Foo, Bar)
        foobar = new FooBar
        foobar.hi; foobar.bye  # "howdy" then "adios"
        FooBaz = composeFooAndBar(Foo, Baz)  # type error! Our sensibilities are enforced.

~~~
crimsonalucard
>Context is not a requirement of the actual act of composition, though; that's
what I'm trying to help you understand. Context might be why you might compose
two things in a specific way, or why you might restrict composition between
given things, but the act of composition itself is not tied to those specific
requirements. In the functional realm: passing a function and list into a map
function should work for any function and any list; map needn't care whether
or not that function actually is applicable to that list (if it ain't, then
that's the function's problem, not map's). Same deal for the "composeObject"
function (if Foo and Bar shouldn't be composed, then don't try to compose Foo
and Bar), and same deal for the compose procedure (if step1 and step6
shouldn't be composed, then don't try to compose step1 and step6).

Context matters. This is what the type system is for: the type system makes it
so the computer will also know what can compose with what. You claimed the
computer doesn't care; the type system proves you wrong.

>but the computer is still fully capable of composing things regardless of
whether or not you think they should be composed.

Composition as a concept itself doesn't exist as a command on the instruction
set level. It's a higher level concept. Another higher level concept is
teaching the computer about the domain by feeding it into the type system.

Why are you talking about the low-level inability of a computer to comprehend
the domain when, on that same level, the computer is unable to compose things
as well? The topic of conversation is a higher level of logic. Types and
composition allow your computer to know about the domain and to compose.

>To humor your argument, though:

Remember way back when I defined composition as a universal function? Meaning
that the compose function should be able to compose ANYTHING, not just a Foo
and a Bar under your arbitrary composition rule. For example, it should be
able to compose a ONE and a TWO into a THREE as well. That's my arbitrary
composition rule for addition.

Your example fails to prove anything nor humor anyone.

~~~
yellowapple
> This is what the type system is for, the type system makes it so the
> computer will also know what can compose with what.

But the computer doesn't _have_ to know that in order for you to be able to
compose things. You can compose things without a type system (or more
precisely: with a type system where the only actual type is a machine-width
word). You just have to take care to make sure the result makes sense in your
domain (which a type system helps automate for you).

That is: the type system is not the thing that lets you do composition. It's
just the thing that lets you constrain what can be composed.

In the pseudo-Forth example, you take two arbitrary procedures that pop
arguments and push results, and you get back an arbitrary procedure that pops
arguments and pushes results. Your ONE and TWO are now a THREE. They're fully
interchangeable _components_ (remember my suggestion of that word?). Whether
you _should_ interchange them is another story entirely, but you can if you
want to. That's composition.

> Remember way back when i defined composition as a universal function?
> Meaning that the compose function should be able to compose ANYTHING.

That's exactly what the _previous_ OO example did, but you complained that
being able to compose anything somehow doesn't count as composition because
it's not restrictive enough. Now you're complaining that _not_ being able to
compose anything somehow doesn't count as composition.

Let's dig into what you might mean by that, though - specifically with your
"arbitrary composition rule for addition". How would you go about implementing
add(Store, Chicken) (or Store + Chicken)? What would that mean? What would
your THREE actually be after adding your ONE and TWO?

The normal answer would be using a bit of that "mortar and rebar" to _tell_
the computer how to add a Chicken to a Store to get some other type (a
Popeyes, perhaps?), e.g. by defining methods to polymorphically perform that
addition, but it seems like you ain't exactly satisfied by that answer.
Unfortunately, the type system alone doesn't really help you much here; just
because the computer knows what a Chicken and a Store and a Popeyes is doesn't
mean it knows how to add a Chicken and a Store to get a Popeyes.

If you object to the "nonsensical" results of trying to add a Store to a
Chicken, then you can't get around having to define specific implementations
of that composition between a Store and a Chicken. Maybe you can cheat through
it a bit by subclassing them from a common ancestor or having them both
implement an interface and then add against that ancestor/interface, but that
just punts the problem.

~~~
crimsonalucard
>You can compose things without a type system

Compose an array of strings with an integer. What do you get? A runtime error
OR a type error. Please don't make up an arbitrary definition of composition
as a counterexample. These things generally don't compose unless you make up
some definition on the spot.

>You just have to take care to make sure the result makes sense in your domain
(which a type system helps automate for you).

Thanks for retelling me what I've been telling you since the beginning.

> That's exactly what the previous OO example did, but you complained that
> being able to compose anything somehow doesn't count as composition because
> it's not restrictive enough. Now you're complaining that not being able to
> compose anything somehow doesn't count as composition.

You don't get it. It should compose anything under a single rule of
composition WITH CONTEXT. You are defining an arbitrary compose for a specific
pair of types that keeps context, but this definition of composition will fail
for every other instance of any primitive outside of the Foo and Bar types.

>Let's dig into what you might mean by that, though - specifically with your
"arbitrary composition rule for addition". How would you go about implementing
add(Store, Chicken) (or Store + Chicken)? What would that mean? What would
your THREE actually be after adding your ONE and TWO?

You're asking me? LOL. I'm actually the one asking YOU. I'm telling you that
it's NOT POSSIBLE to compose things this way for objects and procedures;
you're claiming it is.

Let me state this as specifically as possible. Composing two THINGS under a
universal DEFINITION with correct CONTEXT is what I'm talking about. Meaning a
function that can compose cooking and ingredients into food can also compose
eggs and sugar into a cookie, while maintaining knowledge about what a cookie
is and what food is.

Additionally, I am SAYING that such a compose function, taking objects or
procedures as parameters, with OR without a type system, CANNOT be written.

>but it seems like you ain't exactly satisfied by that answer. Unfortunately,
the type system alone doesn't really help you much here; just because the
computer knows what a Chicken and a Store and a Popeyes is doesn't mean it
knows how to add a Chicken and a Store to get a Popeyes.

Yeah, I'm not satisfied with your answers. I'm satisfied with AN answer that
you don't currently know about. It is definitely possible to compose two
primitives with context, and with no mortar or rebar, under a universal
compose function. Just not with objects or procedures as primitives.

>If you object to the "nonsensical" results of trying to add a Store to a
Chicken, then you can't get around having to define specific implementations
of that composition between a Store and a Chicken. Maybe you can cheat through
it a bit by subclassing them from a common ancestor or having them both
implement an interface and then add against that ancestor/interface, but that
just punts the problem

The problem isn't adding these things together (it's impossible to add these
things together); it's getting your computer to know what can be composed and
what can't. Your computer should know that composing a chicken with a store
doesn't make sense. Either way, you're starting to see the full picture,
because it doesn't even make sense to compose two chickens without you making
up an arbitrary definition.

There is ONE primitive where you can do this type of composition. It also
relies on parametric polymorphism, so in general you can't compose all these
primitives together, only primitives of relevant context. Primitives of
relevant contexts will compose to form higher-order abstractions with contexts
that make sense under a SINGLE definition.

If you build your entire program using nothing but composing this primitive
then every atom of your program is legos. Modular, reusable, refactorable,
scalable. Powerful.

~~~
yellowapple
> You are defining an arbitrary compose for a specific pair of types that
> keeps context but this definition of composition will fail with every other
> instance of any primitive outside of the Foo and Bar type.

Well guess what? _That's what's necessary to compose things._ That's what
your compiler or interpreter is doing behind the scenes. You cannot escape
this.

And yes, "things" includes functions (the "primitive" you seem so averse to
actually naming for some reason). It might surprise you to learn that
"functions" and "procedures" are literally the same thing as far as the
computer's concerned (at most, a "function" might be a procedure plus type
information, or it might be a procedure that does its own type checking, but
it's ultimately a procedure nonetheless). When you pass a "function" as an
argument, you're literally passing around a pointer to the code that needs to
be executed (whether already compiled or in some not-yet-compiled
representation). But as far as the computer knows, it's just yet another
machine-width integer; you would have to establish what it means to "compose"
two integers.

Composing functions is - when all is said and done - literally what my pseudo-
Forth example demonstrated, because functions are ultimately just procedures.
Forth literally has no type system (at least not by modern standards), and yet
it fulfills the literal _actual_ definition of function composition. But _no_,
for some reason that ain't good enough, probably because you seem to believe
that there's no middle ground between _clean_ composition (what a type system
_actually_ provides) and no composition at all.

Put differently:

> These things generally don't compose unless you make up some definition on
> the spot.

_Nothing_ composes unless you make up some definition on the spot.
Composition is meaningless unless you define what "composed" means in a given
context. For functions, they're "composed" by calling one with the other's
output. For procedures, they're "composed" by calling them one after the
other. For objects, they're "composed" typically by either aggregation or
inheritance/delegation.
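
Those three ad-hoc senses of "composed" can be made concrete in a few lines of Python (my own sketch, not from the thread):

```python
# Three different things people call "composition", side by side.

def compose_fn(f, g):        # functions: feed g's output into f
    return lambda x: f(g(x))

def compose_proc(p, q):      # procedures: run one after the other
    def both():
        p()
        q()
    return both

class Engine:                # objects: composed by aggregation
    def start(self):
        return "vroom"

class Car:
    def __init__(self, engine):
        self.engine = engine  # the Car holds (aggregates) the Engine

print(compose_fn(str, len)("abcd"))  # '4'
calls = []
compose_proc(lambda: calls.append("p"), lambda: calls.append("q"))()
print(calls)                         # ['p', 'q']
print(Car(Engine()).engine.start())  # 'vroom'
```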

> If you build your entire program using nothing but composing this primitive

Not even the purest of functional languages can truly do that. Ultimately,
they rely on _actual_ primitives for data; it ain't functions _all_ the way
down. Even purely-mathematical functions need numbers to, um, func.

The closest you can get to that is... (drumroll...) Forth, which treats any
word it doesn't recognize as a call to an imaginary procedure that pushes the
value of that word onto the stack, thus trying its darndest to make everything
pretend to be a procedure (and thus composable as such). Unsurprisingly -
because functions are just fancy procedures - this concept has worked its way
into the functional programming realm through concatenative programming.

Put differently: the "power" you feel when clicking together functions like
Lego bricks is possible _specifically because_ of procedural composition; the
former is just syntax sugar around the latter.

~~~
crimsonalucard
>Well guess what? That's what's necessary to compose things. That's what your
compiler or interpreter is doing behind the scenes. You cannot escape this.

How MANY times have I said that there is a definition of compose that is
universal, not arbitrary, and not specific to certain contexts? You can
escape this.

> It might surprise you to learn that "functions" and "procedures" are
> literally the same thing as far as the computer's concerned (at most, a
> "function" might be a procedure plus type information, or it might be a
> procedure that does its own type checking, but it's ultimately a procedure
> nonetheless).

Except that I have told you repeatedly that not only am I aware of this,
EVERYONE is aware of this. Why are you repeating this same concept over and
over again? It feels as if this concept is blowing your mind and you have to
share it repeatedly. I hate to break it to you: it's obvious to everyone.

That being said, procedures and functions share an equivalence relation. They
are not completely the same; if they were, then I could call a procedure just
like I can call a function, except I can't. Similar to how all programming
languages are Turing complete but not exactly the same: they are isomorphic,
but there are differences that can be identified and talked about. Following
your logic, there would be nothing to compare among computer languages,
because according to you everything is exactly the same.

>Composing functions is - when all is said and done - literally what my
pseudo-Forth example demonstrated, because functions are ultimately just
procedures. Forth literally has no type system (at least not by modern
standards), and yet it fulfills the literal actual definition of function
composition. But no, for some reason that ain't good enough, probably because
you seem to believe that there's no middle ground between clean composition
(what a type system actually provides) and no composition at all.

I know what you're doing. Get it through your head: I understand it 100%. No
need to elaborate. You are bending the system to imitate functions. You can
bend functions to imitate procedures as well. I have said again and again: I
GET IT. Heck, you can bend Python to be just like Forth and write an entire
Forth compiler in Python. Boom, now there's no difference between Python and
Forth and therefore nothing to talk about.

Composition is a property of functions through and through. It is not a
property of procedures. You can bend procedures to imitate functions, but as
I have said again and again, we're not here to talk about the isomorphism
between all programming languages and styles of programming. We are here to
talk about differences. Functions compose. Procedures do not. Stop telling me
they are the same; it's like saying that you can drive a hunk of metal just
like you can drive a car. How? By making a car out of the metal... just like
how you made Forth imitate function composition.

>Nothing composes unless you make up some definition on the spot. Composition
is meaningless unless you define what "composed" means in a given context. For
functions, they're "composed" by calling one with the other's output. For
procedures, they're "composed" by calling them one after the other. For
objects, they're "composed" typically by either aggregation or
inheritance/delegation.

You still don't get it. What do I mean by making up some definition on the
spot? It means that, given two primitives I've never seen before, I have to
make up a new compose function. So for objects A and B I have to define a new
compose function, composeAandB. For objects C and D I have to make a
composeCandD. What about procedures? How do I compose step 1 and step 6?
Well, composeStep1AndStep6 probably needs to insert steps 2 through 5 in
between to get a composition. Or, if they shouldn't compose, period, how will
a type system or a computer know that these two things can't be composed? It
can't be done; you must manually prevent this. Arbitrary definitions of
composition on the spot, that's what I mean: custom logic literally all over
your code to glue things together, creating more complexity with every
"composition".

When I say a compose function that has a universal definition, I mean one
where, given primitive A and primitive B, two primitives I have NEVER seen
before, I can compose the two AND the composition preserves context.

I have two functions: one takes sunlight as brightness levels and outputs
energy; another takes energy and outputs locomotion, or distance travelled.

A :: SUNLIGHT -> ENERGY

B :: ENERGY -> DISTANCE

COMPOSE(g, f) = lambda x: f(g(x))

COMPOSE(A, B) :: SUNLIGHT -> DISTANCE

There it is: a single universal definition. New context is created without the
need to redefine anything. True composition. I observed from your post that
you already know the definition of function composition. What you didn't know
is how it creates new context from old context.
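
With parametric polymorphism, one compose definition serves any compatible pair of functions, and a type checker tracks the new SUNLIGHT -> DISTANCE context automatically. A sketch using Python's `typing` module (the function bodies and unit conversions are made up for illustration):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# One universal definition: works for ANY (A -> B) and (B -> C).
def compose(g: Callable[[A], B], f: Callable[[B], C]) -> Callable[[A], C]:
    return lambda x: f(g(x))

Sunlight = float  # brightness level
Energy = float
Distance = float

def photosynthesize(s: Sunlight) -> Energy:   # A :: SUNLIGHT -> ENERGY
    return s * 0.1

def locomote(e: Energy) -> Distance:          # B :: ENERGY -> DISTANCE
    return e * 2.0

# COMPOSE(A, B) :: SUNLIGHT -> DISTANCE — new context, no new definition.
sunlight_to_distance = compose(photosynthesize, locomote)
print(sunlight_to_distance(100.0))  # 20.0
```

A checker such as mypy would reject composing two functions whose types don't line up, which is the "preserves context" property being argued for.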

>Not even the purest of functional languages can truly do that. Ultimately,
they rely on actual primitives for data; it ain't functions all the way down.
Even purely-mathematical functions need numbers to, um, func.

This is how I know you don't know what you're talking about. Do you know
lambda calculus? Where are the numbers in that? There are no numbers. How are
numbers represented in lambda calculus? Look it up. You can take a look at the
Nat type in Haskell or Idris. The fact that Int exists is abstraction leakage
from the lower levels of the computer. Literally any data structure can be
represented by only functions, including numbers.
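
The encoding being alluded to is the standard Church-numeral construction, which can even be sketched in Python:

```python
# Church numerals: a number n is "apply f, n times" — pure functions,
# no integer literals in the representation itself.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):  # decode to a machine int, for display only
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(add(one)(two)))  # 3
```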

>The closest you can get to that is... (drumroll...) Forth, which treats any
word it doesn't recognize as a call to an imaginary procedure that pushes the
value of that word onto the stack, thus trying its darndest to make everything
pretend to be a procedure (and thus composable as such). Unsurprisingly -
because functions are just fancy procedures - this concept has worked its way
into the functional programming realm through concatenative programming.

Guess what: procedures are just fancy functions. Your mind should be blown
right now. When you do Forth programming you're actually programming in
Haskell. Understand? The two are one and the same. You really need to
understand that underneath, the computer doesn't know about context... All
right, I'll stop being sarcastic. Imperative procedures do not compose unless
you bend them to compose. That's all.

>Put differently: the "power" you feel when clicking together functions like
Lego bricks is possible specifically because of procedural composition; the
former is just syntax sugar around the latter.

There's no such thing as procedural composition unless you make it up on the
spot for every two procedures you want to compose. Again, context is required
here. A type checker can't check how to compose two procedures.

You literally just called functional programming syntactic sugar. Literally as
if nobody knows that assembly language is imperative. Who on the face of the
universe doesn't know this? Everybody knows assembly language is imperative,
yet no one uses the term "syntactic sugar" to describe every language built on
top of it, only you. You need to stop and open your eyes, and stop advertising
the fact that procedures and functions are isomorphic as if it's the greatest
discovery known to mankind. It's not; it's universally well known.
[https://www.wikiwand.com/en/Curry%E2%80%93Howard_corresponde...](https://www.wikiwand.com/en/Curry%E2%80%93Howard_correspondence)

Everybody knows about the call stack and everybody knows about the heap, so
stop with your "revelations".

The "power" you feel when using Forth comes from the fact that you are using
Forth to imitate the power of functions, and your brain is amazed that it can
be done with stacks and a pointer (revelation: everyone knows how it works).
You need to realize that the functions hold the power and Forth is bending
over backwards to become a function. Assembly instructions are bad primitives
as well; literally, we bend an instruction set over backwards so it can
imitate functions, because that's where the power is.

~~~
crimsonalucard
Also check out the mathematical definition of function composition. Note how
there are type signatures. Function composition defined without types is
incomplete.

[https://www.wikiwand.com/en/Function_composition](https://www.wikiwand.com/en/Function_composition)

~~~
yellowapple
See the Scheme example in
[https://www.wikiwand.com/en/Function_composition_(computer_s...](https://www.wikiwand.com/en/Function_composition_\(computer_science\)).
Where, pray tell, are the type signatures? Come to think of it, where are the
type signatures in the page you linked?

You'll notice that said example is fundamentally equivalent to the pseudo-
Forth one I provided (except that mine doesn't take an arbitrary number of
functions/procedures to be composed, though it's perfectly feasible to just
call "compose" repeatedly to accomplish the same thing).
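
For instance, the repeated-compose trick is a one-liner with a fold (a Python sketch, not the Scheme example itself):

```python
# Variadic compose: composing n functions is just repeated
# two-function composition, folded over the argument list.
from functools import reduce

def compose(*fns):
    return reduce(lambda f, g: lambda x: f(g(x)), fns, lambda x: x)

inc = lambda x: x + 1
double = lambda x: x * 2
print(compose(inc, double)(5))  # inc(double(5)) = 11
```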

~~~
crimsonalucard
The mathematical definition is canonical. It is the one that preserves
context. The scheme definition is incomplete and allows for invalid
compositions to occur.

Forth is not composing anything. Remember that: it is imitating composition
through a collection of primitive commands. Composition exists in a function
as a primitive. Again, no one is talking about the isomorphism between
procedural and functional languages.

Composition exists as a primitive in functional languages; in Forth and other
procedural languages it exists only as a concept built from multiple commands.
Get it?

------
hallihax
Maybe I'm just too set in my ways - but nothing in this article looks like a
solution to anything remotely relevant to most software development.

Difficulty in updating software isn't a new problem - but we already have
much, much better solutions than this. I don't think there's anything wrong
with changing existing code - and if changing existing code is causing you so
much pain that you're led down _this_ route, then you probably have bigger
issues.

Maybe this methodology just isn't for me - but I really don't see how this
improves anything from a development perspective. I only care about the
current state of the code - and if I need to care about the previous state of
the code, then I just check the history. The idea of having to understand the
entire history of a codebase just to grok the _current state_ seems insane.

------
hacker_9
Append-style programming is very interesting, especially when thinking of
modelling future languages closer to natural language. This foray, though,
seems to end up generating more problems than it's worth.

For one, it's not clear how this aids easier development - hasn't it just
shifted the task from understanding a complex program to understanding a
complex overlapping sequence of b-threads instead?

Secondly, and I think the nail in the coffin: the compiler would have to be
brilliant at optimisation to make the final program performant. Take the idea
of 'show ads' later being disabled by another overlapping b-thread - does this
mean the network request is never performed? Or is it performed and the result
thrown away, wasting resources? How would an 'offline mode' requirement be
appended as a b-thread in such a case?

~~~
codebje
The events are a scheduling mechanism, not the work done.

Performance probably relates more to the ratio of time spent selecting the
next event to time spent executing code between yields than anything else: a
thread waiting on an event joins a queue, a thread requesting an event joins
the same queue but also flags it as ready, and a thread blocking an event
flags the queue as not ready. When a queue is selected, it's drained
completely before a new queue is selected.
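
That selection loop can be sketched with Python generators standing in for b-threads (a toy illustration of the request/wait/block idea, not the article's implementation; all names are made up):

```python
# Each thread yields (request, wait, block) sets of event names; the
# scheduler fires an event that is requested and not blocked, then
# resumes every thread that requested or waited on it.
def run(threads):
    states = {t: t.send(None) for t in threads}  # prime the generators
    trace = []
    while states:
        requested = set().union(*(s[0] for s in states.values()))
        blocked = set().union(*(s[2] for s in states.values()))
        candidates = sorted(requested - blocked)
        if not candidates:
            break
        event = candidates[0]
        trace.append(event)
        for t in list(states):
            req, wait, _ = states[t]
            if event in req | wait:
                try:
                    states[t] = t.send(event)
                except StopIteration:
                    del states[t]
    return trace

def add_water(kind, times=2):  # requests its event `times` times
    for _ in range(times):
        yield ({kind}, set(), set())

def interleave():  # forces HOT/COLD alternation, as in the article
    while True:
        yield (set(), {"HOT"}, {"COLD"})  # wait for HOT, block COLD
        yield (set(), {"COLD"}, {"HOT"})  # wait for COLD, block HOT

trace = run([add_water("HOT"), add_water("COLD"), interleave()])
print(trace)  # ['HOT', 'COLD', 'HOT', 'COLD']
```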

The run cost of the scheduler would depend mostly on the structure used to
maintain the ready set, but a basic double-ended linked list would provide
linear-time operations for ready/blocked management. Whole-program compilation
could provide lookups or perfect hashes for event queueing, or partial
compilation could rely on imperfect hashing.

Optimising during compilation might work best by trying to fuse events such
that the scheduler is invoked less often.

Things that are not apparent from a lightweight article like this include
state transfer (how do I know details of the card inserted?) and the
likelihood that I'd keep the code to each b-thread and hack on those to make
new ones...

------
patientplatypus
Honestly, just don't wind your abstractions too tightly. It's the ultimate
"rookie mistake".

Just today, I had to write a simple navigation menu. Suppose I have pages /a/
through /n/ to navigate to. I abstracted away the button, but I didn't write a
loop over every route. Why? Based on user interactions I might want the
buttons to change shape or color, and I don't want to have to go back and
unwind that abstraction.
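The trade-off might look like this in a Python-ish sketch (the route names and fields are hypothetical):

```python
# DRY: one loop over the routes - compact, but every button is forced
# to stay identical until the loop is unwound.
routes = ["a", "b", "c"]
dry_menu = [{"route": r, "color": "blue"} for r in routes]

# WET: each entry written out, so any single button can diverge later
# without touching the others.
wet_menu = [
    {"route": "a", "color": "blue"},
    {"route": "b", "color": "blue"},
    {"route": "c", "color": "green"},  # a later one-off requirement
]
```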

Some people will look at that code and say it's not DRY. Well, I like to make
code WET - write everything twice. There is some ideal amount of boilerplate,
too, just so someone can read and understand the darn thing - if they have a
few examples to play with, it'll be much easier for them to understand. Also,
it amuses me how much stake people put in nonsense acronyms, as if some sort
of borrowed linguistic authority will cover for ineptitude and not thinking
for oneself.

Ultimately this looks like an over-engineered technical fix for what amounts
to a social problem. Coding is ultimately a social issue (unless you're out
there grinding out 1s and 0s on cuneiform tablets) and requires social
solutions.

~~~
quickthrower2
The /a/ to /n/ thing - well of course it depends. It depends on the meaning of
those.

For example, if I write 1, 2, 3, can you guess the next number? It could be a
sequence, so answer = 4; or just three numbers that happen to be 1, 2, 3,
where the next number I need to add is 8 and later I will remove the 2
(perhaps they are hard-coded user ids).

I am guessing your /a/ to /n/ structure is quite flexible - more like the
latter of my examples.

If you are saying WET even if it is a sequence, then I'd argue that's OK if it
is unlikely to change, though it depends on other factors: is there, for
example, other metadata that depends on the letter and would benefit from the
loop?

------
kazinator
Yikes; this nonsense will turn into an incomprehensible ball of hair in the
implementation of anything real.

You don't know if anything will work or continue to work due to being randomly
blocked or otherwise interfered with by code that either already exists or
will be added tomorrow.

Basically this is just flow-based programming (FBP) in a new form.

[https://en.wikipedia.org/wiki/Flow-
based_programming](https://en.wikipedia.org/wiki/Flow-based_programming)

Every new generation of coders reaches a puberty characterized by
uncontrollable excitement with multiple threads of control doing little things
independently.

~~~
crimsonalucard
Flow-based programming is basically functional programming. It feels like a
redefinition of a style of programming that's already heavily established.

~~~
yellowapple
I'd say it's more of a potential application of functional programming - that
is, you could* certainly use functional programming techniques/languages to
implement flow-based programming, but functional languages are not
automatically flow languages merely by existing.

* Not that it's strictly necessary. See also: Unix pipes, which fit the definition of flow-based programming while being possible - and in fact frequently done - in very-much-not-functional languages, both for implementing the individual steps and for connecting them together.

~~~
crimsonalucard
It looks like flow-based programming is exactly the same thing as functional
composition. Can you illustrate for me how these two concepts are disparate?

I ask because if they are one and the same, then, since the only way to link
functions in functional programming is through function composition,
functional programming is flow-based programming.

~~~
yellowapple
> It looks like flow-based programming is exactly the same thing as functional
> composition.

You can do flow-based programming by composing functions, but - like I
mentioned in my previous comment - it is possible to accomplish it without any
sort of functional programming at all (unless you consider Bash and Perl and C
to be functional programming languages, of course!).

> Can you illustrate to me how these two concepts are disparate?

Say you have a connection between three elements in the flow, like so:

        step1 -> step2 -> step3

Each of these steps _could_ be defined functionally (assuming that the maps
read lazily):

        step1 = [stream of digits of pi]
        step2 = map { _ * 2 }
        step3 = map { _ / 2 }
        flow = step3 step2 step1  # map [ map [3 1 4 ...] { _ * 2 } ] { _ / 2 }

Or they could be done procedurally:

        step1 = while [read digit from digits of pi] { write [digit] to stdout }
        step2 = while [read input from stdin] { write [input * 2] to stdout }
        step3 = while [read input from stdin] { write [input / 2] to stdout }
        flow = step1 | step2 | step3  # each digit -> digit * 2 -> digit / 2

With the pipe effectively defined as follows:

        | = spawn left; spawn right; while [read input from left] { write input to right }

The key thing here, though, is that whether the underlying language is
procedural or functional or whatever has very little relevance; what
_actually_ matters is that each of those steps can run concurrently,
continuously listening for new inputs and sending new outputs. The functional
approach does this by lazily reading values from a sequence. The procedural
approach does this by more explicitly running the three loops concurrently
(whether with preemptive processes/threads or via coroutines / green threads).

This all assumes there ain't any branches in that flow graph, though; it gets
a lot harder to represent a branch flow graph through functional means (not
impossible, but most functional languages that delve into this sort of thing
typically use Erlang-style actors and message passing instead of trying to
represent this directly with function composition), while in the procedural
case it's just deciding which output stream to use.
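The functional version above can be sketched with Python generators (lazy and pull-based; the digit stream is shortened to a hard-coded prefix for illustration):

```python
def step1():
    # stand-in for the stream of digits of pi
    yield from [3, 1, 4, 1, 5]

def step2(source):
    return (x * 2 for x in source)  # lazy: nothing runs until pulled

def step3(source):
    return (x / 2 for x in source)

flow = step3(step2(step1()))
print(list(flow))  # [3.0, 1.0, 4.0, 1.0, 5.0]
```

Pulling from `flow` drives all three steps one element at a time, which is the generator analogue of running the procedural version's three loops concurrently.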

~~~
crimsonalucard
None of your steps are changing state. They are essentially functional in
terms of composition.

The sibling commenter had a better example of a tokenizer, which changes its
own state on every input.

~~~
yellowapple
The changing of internal state is not what defines "flow programming". The
actual _flow of data_ between concurrently-running processing steps is what
defines "flow programming".

But okay, here's a step with changing state:

        step4 = {
          previous = [read digit from stdin]
          while [read digit from stdin] {
            out = previous * digit
            previous = digit
            write out to stdout
          }
        }

        steps = step1 | step2 | step3 | step4
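As a Python generator, that stateful step might look like this (a sketch; `source` stands in for stdin):

```python
def step4(source):
    # multiply each digit by the previous one, carrying state across inputs
    it = iter(source)
    previous = next(it)
    for digit in it:
        yield previous * digit
        previous = digit

print(list(step4([3, 1, 4, 1, 5])))  # [3, 4, 4, 5]
```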

------
brianberns
I was recently tasked with making some small incremental changes to a large
code base that I was not familiar with. I was successful specifically because
I could see the existing implementation and use it as a template for making
the necessary changes.

For example, if I had to add a widget to a form, I could copy the
implementation of an existing, similar widget and then make the necessary
changes to it.

Similarly, I had to change the datatype of a field from string to integer all
the way from the UI back to the database. This was fairly easy because I could
make the change at one level, and then run the compiler to see what my change
broke one level up. Fix, rinse, and repeat.

If I had to do any of this _without_ access to the existing code, it would've
been 100x harder.

------
mannykannot
I am having trouble reconciling the examples with the description of how
B-threads work. Take the three-thread version of the 'water level' example: we
start with the first two B-threads waiting on 'waterLevelLow', but the third
thread is not waiting on anything, and presumably could advance to the point
where it is waiting on 'addCold' - but it has blocked 'addCold' from
occurring (a fact used later in the example), so if it did advance before
'waterLevelLow' occurred, it would become deadlocked. I think there must be
more complexity here than is being explained.

~~~
mannykannot
When I slowed down the animation, I saw what was happening: the third thread
is waiting on 'addHot', and while (and only while) it is waiting, 'addCold' is
blocked - so there is no deadlock here, but is deadlock ruled out in general?

More generally, does the claimed ease of modification come at the cost of
expanding the amount of reasoning about concurrency that you have to do?

------
taowen
We have made a system with similar ideas to re-organize the business logic in
a way that is more continuous instead of scattered around (
[https://medium.com/software-engineering-problems/out-of-
the-...](https://medium.com/software-engineering-problems/out-of-the-tar-pit-
another-approach-b3e701e089ee) ). However, we do not believe event sourcing is
the answer to state management problem. It requires too dramatic change on
existing infrastructure and ways of human thinking.

------
magicalhippo
This article was way too short to make any clear case for how b-threads would
make changes easier.

In the end, our code is complex because what we want to achieve is complex.
Sure, sometimes programmers add extra complexity on top for no good reason,
but there's a certain level you just can't get away from.

Moving the complexity to understanding event streams does not seem to be any
improvement at all, at least as far as explained in the article. The toy
examples were entirely too simple to give any real sense of how this could
work in practice.

------
sadness2
Experience has led me to conclude that code maintainability is a team, not a
culture. Dedicated staff are needed to recommend cross-cutting solutions, roll
them out company-wide, perform refactors, and clean up technical debt. Making
it a "culture" is too hard: that means getting buy-in on how crucial the
function is for the sustainability of product development. But once it's a job
description, it will actually happen.

------
sktrdie
Author here: sorry if the "append-only" analogy went a bit astray in the
comments - it was certainly meant to be taken with a pinch of salt.

All regular programming methodologies apply with b-threads, and most of the
time changing requirements would require one to change the b-threads involved
in that requirement. The append-only analogy was really more to showcase the
feature of incrementality.

Technically one can develop a complicated system just via append-only, but
again, you're likely better off having clean b-threads that deal with specific
behavior, organized in specific and clean ways, the same way we organize any
other software.

From the comments it seems I wasn't successful at highlighting the key
insights of this paradigm. One reading I suggested in the article, which does
a much better job of explaining these concepts, is "The quest for runware: on
compositional, executable and intuitive models" [1], where the authors
essentially describe the need for something like Behavioral Programming (the
authors are the creators of BP). Some quotes from the paper:

> As mentioned earlier, it will be possible to represent new requirements, or
> changes thereof, in a new behavioral component or module, with minimal
> change to the previously specified parts of the model, and without
> sacrificing executability and manageability. Such modules will be simply
> added to the existing model, virtually ‘piled-atop’ it, with no component
> specific interface, connectivity, or ordering requirements.

> In our vision, the units of the specification and models are not assembled
> in detail like resistors or chips on a computer board, or methods and fields
> in an OO-programming object class. The interweaving of behavioral modules
> will be facilitated by their reference to common aspects of system behavior
> described using shared vocabularies (for example, common events), and not
> via mutual awareness and direct communication between components. From the
> point of view of such a module, the other modules could be transparently
> replaced by new ones. In fact, the implementation of any part of the
> specification or a model should be replaceable by some kind of ‘invisible
> box’, whose implementation remains a mystery, and the effectiveness of the
> remaining modules and the integrity of the overall system behavior will be
> preserved.

And from the conclusion:

> What will happen if and when the human way of expressing requirements for
> systems will become almost indistinguishable from the way this is done with
> computer programs? And what will be the result of a level of
> compositionality that would allow humans to add capabilities to a system
> with much less dependence on that system than is possible today?

> When a collection of specification units grows over time, accumulating an
> unmanageable collection of patches, exceptions, and enhancements, it is
> likely developers will call for merging or refactoring them into much more
> concise artifacts. The new modules will replace the existing collection in
> all its uses (final executing system, official record of the specification,
> and means for communication between humans) without changing other parts of
> the specification. Such refactoring will be acceptable, even if it ends up
> being done manually, as it will focus on capturing the human’s revised
> perception of the affected behavior.

1\.
[http://www.wisdom.weizmann.ac.il/~harel/papers/Runware.pdf](http://www.wisdom.weizmann.ac.il/~harel/papers/Runware.pdf)

~~~
_pmf_
I liked the article (in fact, for me it was a better introduction than the
"Liberating Programming, Period" article by Weizmann), but Hacker News is kind
of a lost cause regarding, well, "news". Everything that cannot be shoehorned
into their view of faux-FP has to be bad.

Someone should probably post this on lobste.rs, which is a bit more open
minded (I would, but don't have an account).

~~~
arunaugustine
There's already a discussion there:
[https://lobste.rs/s/tcequd/b_threads_programming_way_allows_...](https://lobste.rs/s/tcequd/b_threads_programming_way_allows_for)

------
punnerud
Gives me the feeling of business logic with only SQL and PL/SQL

------
munificent
I feel like Babbage when he said, "On two occasions I have been asked, 'Pray,
Mr. Babbage, if you put into the machine wrong figures, will the right answers
come out?' I am not able rightly to apprehend the kind of confusion of ideas
that could provoke such a question."

Maybe I'm missing some key insight, but this process seems like madness to me.

 _> Then we wouldn’t have to read and understand where to squeeze our changes.
We’d simply add stuff based on the new requirements and things would somehow
magically work._

But, at some point, _someone_ has to understand if those requirements clash
with the old ones that are currently implemented. The hot/cold water example
is, I think, telling. At no point in the article does the author _actually
state what the requirements (new or old) are._ You have some program that,
whenever the water level gets low, adds three hot waters. I guess that was
some original requirement.

Then another b-thread is added that also adds three cold waters. Now whenever
the water gets low, twice as much total water is added. Presumably that's OK?
When the second b-thread was added, was the requirement to fill the container
with cold water instead of hot, or was it to fill the container with twice as
much lukewarm water?

Finally, the third b-thread is added that interacts with the two previous ones
to cause the hot and cold water adds to be interleaved. What requirement does
that meet? The stated purpose of all of this is that you don't need to
understand the original program to modify it, but the entire purpose of this
third thread is to interact with the previous two. You have to know there are
already two b-threads going, which events they listen to, and what they yield
in order to create the third thread that blocks them. So I don't see how this
is any better than needing to read the code in a normal program.
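For what it's worth, the three b-threads under discussion can be sketched with Python generators and a toy event selector (this is a guess at the article's semantics, not the real library):

```python
def add_hot():
    for _ in range(3):
        yield {"request": {"addHot"}}

def add_cold():
    for _ in range(3):
        yield {"request": {"addCold"}}

def interleave():
    # the third b-thread: it only speaks event names, yet it still has
    # to be written with the other two threads' events in mind
    while True:
        yield {"wait": {"addHot"}, "block": {"addCold"}}
        yield {"wait": {"addCold"}, "block": {"addHot"}}

def run(threads):
    """Toy selector: pick a requested-and-unblocked event, wake bidders."""
    log = []
    bids = {t: next(t) for t in threads}
    while True:
        blocked = set().union(*(b.get("block", set()) for b in bids.values()))
        requested = [e for b in bids.values()
                     for e in b.get("request", set()) if e not in blocked]
        if not requested:
            return log
        event = requested[0]
        log.append(event)
        for t, b in list(bids.items()):
            if event in b.get("request", set()) | b.get("wait", set()):
                try:
                    bids[t] = t.send(event)
                except StopIteration:
                    del bids[t]

print(run([add_hot(), add_cold(), interleave()]))
# ['addHot', 'addCold', 'addHot', 'addCold', 'addHot', 'addCold']
```

Note that writing `interleave` required knowing exactly which events the other two threads request, which is the point being made above.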

I feel like I'm taking crazy pills.

~~~
hinkley
No, me too.

At one point, years into the Agile movement, we were wrapping up a very
difficult meeting and I said words to the effect of, "You know, we knew going
into this that most of the problems in a project come from the requirements
process. We have tons of studies that show that. And instead of tackling that
problem, we just spent 10 years trying to get better at our part of the
development process. Why did we do that? Did we think we would shame the
business and management people into doing their jobs better?"

We are the drunk looking for his keys under the street light because the light
is better here, instead of in the alley where we dropped them. I want another
book called "the inmates are running the asylum" but about software
development instead of design concerns.

~~~
mr_crankypants
Scrum, in particular, is a fascinating beast. Looking at how it was in the
early days, you can see that a huge motivation was to try to get the non-
development bits of the process under control. For example, one of the big
ideas behind the whole sprinting thing was to limit the time during which the
requirements could be changed to one day in, say, every 10. The whole idea
behind "as a/i want/so that" was to try and make it so that a clear motivation
and context were being communicated to the development team.

And then, over time, it became clear that, for whatever reason, management
just wasn't going for it. So all these different bits and bobs pivoted and
morphed and re-scoped into a process by which the stone tries to squeeze more
blood out of itself.

IMO, the most valuable bit of Scrum isn't user stories, or story points, or
sprinting, or having a Scrum Master, or burndown charts, or anything like
that. It's the idea of having a single, designated person whose job is to tell
people, "no": The Product Owner.

The dirty secret is, velocity is a crap metric. It unskeptically measures
total things implemented, even though we all know that only a portion - I'm
guessing often less than half - of those actually needed to be implemented.
Meaning that the best way to increase a product team's _actual_ productivity
isn't to increase velocity, it's to maximize the average usefulness of
features being implemented, preferably by identifying the useless and less-
useful ones and deciding not to implement them. Or, equivalently,
identifying the ones that seem most likely to be useful given the currently
available information, and making sure those are always the ones at the top of
the to-do list.

My suspicion is that, the better your product manager is at that particular
job, the less need there is for any of the fancy ceremonies. Because most of
those ceremonies aren't there for the developers' benefit; they're really only
there to make it easier for the PO to scope and prioritize features.

~~~
hinkley
Scrum has been twisted into a way to gaslight developers by shaming them for
their estimating skills _when that was project management's job the entire
fucking time_. It's the biggest dodge going right now. A massive case of
deflection and, dare I say, projection.

If the quality of your estimations has ever come up on your annual review,
that's them bargaining you down by making you feel bad about yourself.

Someone in a video I watched recently pointed out that story points per week
is a graph plotting time on both axes.

One of the earlier agile methodologies (FDD) had one thing figured out: the
law of large numbers works just fine for long-term estimation, as long as you
can identify the stories and the range of story 'sizes' is within an order of
magnitude (e.g., a day vs. 2 weeks). You don't have to give a shit if a story
is 4 points or 7. That's a waste of everyone's time and especially energy.
_It's horizontal aggression condensed into a management model_. We need to
start refusing, as a collective, to engage. The only discussion you need to
have is whether a story is less than two weeks, more than two weeks, or _way_
more than two weeks. Those discussions happen at a much lower frequency.

~~~
mr_crankypants
There is one thing I really like about story points: Disagreement about them
drives a whole lot of useful conversation, and can reveal communication
problems and misunderstandings that are difficult to root out otherwise. That,
in turn, _should_ give the PO useful feedback to help with refining the
requirements. Which means that story pointing meetings should, in theory, have
a huge multiplier effect on productivity, where every hour spent on activities
like story pointing saves many hours of effort wasted on building unnecessary,
mis-scoped, or miscommunicated features.

But that requires a very engaged PO who really gets what grooming is about.
Also, I don't know what it is with MBA types, but it really seems like
anything that can be turned into a KPI, will be turned into a KPI, without
ever pausing to think about whether it makes any sense to do so. And that
makes story points radioactive: In the absence of intense and intelligent
regulatory oversight, their potential value is more-or-less negated by their
potential for abuse and misuse.

Incidentally, this is what fascinates me about the Forth approach: The
unflinching dedication to stripping the system down to only the things that
you actually need. The problem I see is, the Forth way of doing it seems to
assume you're working with an army of one. How do you scale that up to a
modern product team that may comprise 10, 50, 100, even 1000 people?

------
HenryDavis65
paywall

