
Defunctionalisation: An underappreciated tool for writing good software - Smaug123
https://www.gresearch.co.uk/article/defunctionalisation/
======
diegoperini
Great article, great technique and a lot of on-point advice for intermediate-
level programmers, BUT, it has the same problem many functional programming
advocacy posts suffer from: the code examples are given in a language everyday
programmers will probably not recognize.

Don't get me wrong, I love functional programming. I use Haskell and OCaml
with joy. I also read about Idris, F#, Elixir and such quite often but I can
also remember when all of this was alien to me.

Useful advice requires an accessible set of examples, and the very first
example in this article (a basic calculator) already makes use of sum types,
pattern matching, higher-order functions and recursion, in a programming
language with a relatively low adoption rate.

I don't know a solution to this issue that doesn't place an extra burden on
the author. They could use one pure functional language alongside a widely
used, strongly typed language (TypeScript, C++ etc.) in their examples, but
that's probably too much to ask for.

Maybe my understanding of the target audience is wrong and my whole criticism
is obsolete. Please correct me if that is the case.

I have friends with 5+ years of industry experience in languages like C#,
JavaScript, Java and PHP, and they tend to confirm my claims about the
accessibility of this type of article.

Does anyone agree/disagree?

~~~
Smaug123
I (author) agree with you. The target audience of this post was originally my
coworkers, if I even targeted it at all; but there is certainly room in the
world for more basic explanations targeted at people who aren't used to the
idioms of functional programming.

The trouble is really that defunctionalisation is much, much easier if you've
got sum types, pattern-matching, and higher-order functions. (It's not clear
to me that it has any use at all if you don't have higher-order functions.) Is
it worth the time trying to implement this sort of pattern in C#? I don't
know. Insofar as C# is nice to write, it's because the IDE writes so much of
the boilerplate for you, and no IDE is set up to admit this kind of pattern.

~~~
barrkel
C#'s Expression<T> is almost exactly what you use in your example, baked into
the language - you specify a lambda but the compiler emits construction of an
expression tree.

[https://docs.microsoft.com/en-
us/dotnet/api/system.linq.expr...](https://docs.microsoft.com/en-
us/dotnet/api/system.linq.expressions.expression-1?view=netframework-4.8)

[https://docs.microsoft.com/en-
us/dotnet/csharp/programming-g...](https://docs.microsoft.com/en-
us/dotnet/csharp/programming-guide/concepts/expression-trees/)

~~~
Smaug123
Indeed, we have used `Expr` and the `ReflectedDefinition` attribute internally
in F#. However, if the user is capable of giving you literally anything,
you'll struggle to optimise it! We've found it important to artificially
restrict what the user can give us, by explicitly modelling the domain.

------
kkdaemas
I might explain this concept to OOP-minded programmers like this:

Sometimes, you can improve your code by having it return a description of what
to do, rather than doing the thing directly. This is like in SQL where you
might return a query plan, rather than executing a query. Once you have this
description, or plan, you can analyze, transform and inspect it before passing
it to some execution engine that actually does the work.
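
For instance, a minimal sketch of the idea in TypeScript (all names here are hypothetical, not from the article): the program builds a plan as tagged data, which can be inspected and transformed before anything actually executes.

```typescript
// A "plan" is tagged data describing work, not doing it.
type Step =
  | { kind: "copy"; from: string; to: string }
  | { kind: "delete"; path: string };

// Because the plan is data, we can transform it before execution,
// e.g. strip out dangerous deletes.
function stripDeletes(plan: Step[]): Step[] {
  return plan.filter((s) => s.kind !== "delete");
}

// The "execution engine": the only place real work would happen
// (stubbed here as rendering a string).
function renderStep(step: Step): string {
  switch (step.kind) {
    case "copy":
      return `copy ${step.from} -> ${step.to}`;
    case "delete":
      return `delete ${step.path}`;
  }
}

const plan: Step[] = [
  { kind: "copy", from: "a.txt", to: "b.txt" },
  { kind: "delete", path: "old.txt" },
];

console.log(stripDeletes(plan).map(renderStep).join("; "));
// prints "copy a.txt -> b.txt"
```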

~~~
gambler
_> I might explain this concept to OOP-minded programmers like this:

Sometimes, you can improve your code by having it return a description of what
to do, rather than doing the thing directly._

...which requires you to turn functions into data structures and then
interpret those.

Meanwhile, in OOP land you have objects, which can be seen as self-
interpreting data structures.

~~~
kkdaemas
I don't claim that this technique is impossible in OOP; it's just not as
natural without discriminated unions and match expressions.

~~~
gambler
You're missing my point. In OOP every object _is_ computation represented as
data. It's not some kind of "unnatural" design pattern a programmer should
concoct in a special way. Erasing the distinction between data and procedures
(or getting rid of both, if you look at it another way) is one of the
fundamental ideas that led to the creation of OOP in the first place. It
sounds like Alan Kay was more interested in getting rid of data, but it does
work both ways.

Thus, saying "let me explain this to OOP developers" is highly ironic.

If that's not clear, let's go back to your description:

 _> Sometimes, you can improve your code by having it return a description of
what to do_

Every object is (or at least should be) "a description of what to do" in the
exact sense you're using here. This is crucial to understand for properly
using OOP.

~~~
sixbrx
I think that's a vision of OO that not many OO programs I've seen actually
follow. It's more common to see OO programs where object methods represent
the computations, not the whole objects, while the bundled data is used to
store the resulting state of the computations, with much or all of the data
hidden to preserve invariants for that state. Acting directly on the data
like that, without an intermediate representation of the computation itself,
is just the sort of liberty that the video I mentioned in the sibling post
warns can turn into unwanted constraints in some cases.

~~~
gambler
_> I think that's a vision of OO that not many OO programs that I've seen
actually follow._

Maybe not, but it absolutely was part of the original vision. This is why
Smalltalk 80 implements if/else statements and loops as methods, rather than
keywords. A boolean in Smalltalk is not just a value. It's a latent algorithm
for choosing between two blocks of code at some later point in time. It's
_also_ a latent algorithm for operating on other booleans via binary logic
methods. Until you start seeing objects this way, you will not be able to
appreciate the elegance of object-oriented programming.

Here are some examples of how this can work in non-trivial scenarios:

[http://www.vpri.org/pdf/tr2007003_ometa.pdf](http://www.vpri.org/pdf/tr2007003_ometa.pdf)

[https://bracha.org/executableGrammars.pdf](https://bracha.org/executableGrammars.pdf)

------
lilactown
There are, of course, tradeoffs to using defunctionalization (I've also heard
this called a "data DSL"). I have seen these tradeoffs often ignored or argued
past when discussing various solutions that take advantage of it.

The cons to defunctionalization that I have experienced are things like:

You now not only need to test your application, but also the runtime that
turns this data representation into actual work. Bugs can now occur in the
runtime, in the reification of the app logic, or in the integration between
the two.

If your application is particularly concurrent or lazy, then ensuring that
your DSL works well with your main language's concurrency and laziness
machinery can get pretty hairy when you start executing your side effects.

It becomes harder to leverage your language's developer tools; breakpoints and
debugging often end up in your DSL runtime's code, not your application code,
often requiring special-purpose tools to be built to debug your
defunctionalized DSL.

Performance can also be a double-edged sword. On the one hand, you can do some
very clever things; use tricks like memoization, all the way up to writing a
JIT compiler for your defunctionalized DSL to improve performance. However,
you're taking on that work due to the fact that your main language's runtime
can no longer do that work for you. Often these data DSLs end up allocating a
lot of data structures that end up being parsed and thrown away later, and
those allocations increase work in cleanup and the parsing itself.

I also heavily question the efficacy of testing these data DSLs. It is
objectively easier to test pure functions, but on the other hand, how do you
validate that they are correct? Often we don't care about the actual data
representation; we care that it does the work it describes. Properly testing
them then essentially becomes a re-implementation of the DSL runtime with
mocks etc.

For a concrete example of all of these tradeoffs, take a look at React in the
webdev world. React is unequivocally a good idea, but it has required a
massive investment from the React team and the ecosystem to make it correct,
make it fast enough, to create developer tools for it, and to figure out how
to effectively test applications that use it.

------
quantified
This one might be a bit more approachable:
[https://blog.sigplan.org/2019/12/30/defunctionalization-
ever...](https://blog.sigplan.org/2019/12/30/defunctionalization-everybody-
does-it-nobody-talks-about-it/) as originally shared by
[https://news.ycombinator.com/item?id=21916774](https://news.ycombinator.com/item?id=21916774)

~~~
wool_gather
There appears to be a fuller version of this talk on the author's own site:
[http://www.pathsensitive.com/2019/07/the-best-refactoring-
yo...](http://www.pathsensitive.com/2019/07/the-best-refactoring-youve-never-
heard.html)

Agreed it's quite good.

------
Smaug123
Author here: I'm happy to elaborate on any of this, except where I've signed
NDAs about specific projects.

Also an obligatory "we are hiring" on behalf of G-Research; feel free to get
in touch at patrick.stevens@gresearch.co.uk or patrick+hn@patrickstevens.co.uk
if you're interested in quant finance research/development in central London.

~~~
hhmc
G-Research should come with a health warning about just how litigious and
paranoid about IP theft they are. My understanding is that you can't have a
personal phone while at work, you're weighed on the way in and out, and they
sent one of their former quants to jail for several years.

One might suspect that the periodic renames (De Putron, Glouster Research, G
Research) are mostly a tactic to distance themselves from the negative image.

1\. [https://www.bloomberg.com/news/features/2018-11-19/the-
tripl...](https://www.bloomberg.com/news/features/2018-11-19/the-triple-
jeopardy-of-ke-xu-a-chinese-hedge-fund-quant)

~~~
hermitdev
It's quite common in the quantitative finance world. Unscrupulous people steal
a model and try to peddle it to a competitor. The thing is, most models only
work if no one (or only a few others) is doing the same thing. Usually when
approached, most firms stay above board and report it to the person's
employer.

Citadel, for instance, sued a former employee for stealing a model (he
emailed the source for a model to his personal email). He was sued in federal
court at 8 a.m. on a Monday and fired at noon. Criminal charges came months
later. He tried to dispose of the evidence by tossing hard drives into the
Chicago River; dive teams were involved to recover the drives. But Citadel
already had all they needed, because they monitored all outgoing and internal
communication, including MITMing SSL email services.

~~~
hhmc
I'm fully aware of the sensitivities of the quant finance world, but there are
plenty of high-quality workplaces out there that don't have the extremely
invasive approach to security that e.g. G-Research does.

------
dgb23
Fantastic article. I didn't know about this FP terminology but this seems to
be a common, general concept.

While reading it immediately reminded me of:

(1) re-frame (ClojureScript) and Redux (JavaScript), which are both web-
frontend libraries for managing state and event handling. Browser events
dispatch plain, serializable data instead of invoking behaviour directly;
these are in practice tagged union types, kind of similar to the provided
examples? (I'm not very familiar with the syntax in the article.)

(2) it seems to be a common idiom in Rust to move branching logic into pattern
matching and enumerations. So defunctionalisation is applicable here and is
likely very common to refine Result types and defer execution.

(3) as user edflsafoiewq mentioned this is strongly related to the OO Command
Pattern
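
The re-frame/Redux point in (1) can be sketched in TypeScript (a hypothetical reducer over a tagged union, not re-frame's or Redux's actual API): events are plain serializable data, and a reducer interprets them rather than handlers mutating state directly.

```typescript
// Events as plain, serializable, tagged data.
type Action =
  | { type: "increment"; amount: number }
  | { type: "reset" };

// The reducer is the interpreter: it gives the data its meaning.
function reducer(state: number, action: Action): number {
  switch (action.type) {
    case "increment":
      return state + action.amount;
    case "reset":
      return 0;
  }
}

// Because actions are data, an event log can be stored, serialized,
// and replayed to reconstruct state.
const log: Action[] = [
  { type: "increment", amount: 2 },
  { type: "increment", amount: 3 },
];
const finalState = log.reduce(reducer, 0); // 5
```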

------
jonahx
Prior Art: "The Best Refactoring You've Never Heard Of"

[http://www.pathsensitive.com/2019/07/the-best-refactoring-
yo...](http://www.pathsensitive.com/2019/07/the-best-refactoring-youve-never-
heard.html)

~~~
symstym
I think this is a much better introduction than TLA, thanks.

------
6gvONxR4sf7o
This seems to be advocating to turn program programming into compiler
programming. Like it took "every sufficiently complicated program reimplements
half of lisp" and said that was an admirable goal and here's how to do it
well.

I only skimmed the initial algebra talk linked out, but it seems to mirror
what any given programming language does with your code. It takes your text,
does some processing to turn it into an AST, then turns that into a compiled
program and runs it. AFAICT the initial algebra suggestion says to basically
program directly with your AST wherever you can, and then write code to turn
that into a program. Now you've written an AST and a compiler, but naively
interpreting that AST will probably be slow, so maybe you work in some
optimizations. And you don't always write correct code, so you have to add a
debugger too.

Now you've just written a programming language and toolset. Why not just use
an existing language and mature toolset? Is it because your DSL might be
constrained to the original problem well enough to be simpler than anything
general purpose?

Replacing e.g. `map` and a native function with `MyMap` and a few of my own
functions seems to be throwing out all the good that comes with a mature
language ecosystem.

~~~
Smaug123
Yes, I think you've basically summed it up very well!

One of the reasons we use this pattern internally is because we have quite
specific performance requirements which are not well-served by your standard
language compiler/runtime. If we handle some of this compilation workload, we
can make sure we emit constructs in the underlying runtime which have the
right performance properties. We also want to make sure our users don't really
need to think about this sort of low-level mucking around with performance; so
we use the initial algebra to expose natural data-driven abstractions to them,
which we then carefully manipulate into the right forms for the .NET runtime
to have our desired performance properties.

This is an area where F# really shines. Its "computation expressions" make it
really easy to construct DSLs embedded in F#. We offer our users this DSL, and
we promise that anything written using this DSL will be appropriately fast and
safe; but we do sometimes give them an escape hatch. By encouraging the user
to stick to this heavily restricted DSL, we make it easier for them to write
code that we guarantee will perform well. If the code doesn't perform well,
that's a big problem, but crucially it's a problem for the _devs_ to solve,
not for the quant researchers to waste hours digging into.

Entirely separately from the above, we can also offer certain safety
guarantees by restricting to our DSLs. One project in particular has involved
taking something that previously existed but required a lot of manual
procedural bookkeeping on the part of the user, and extracting the "intent" of
the library in a purely data-oriented DSL. As long as the user sticks to our
DSL, they don't need to consider how their computations are sequenced; we'll
do that for them, in the process of converting from the DSL into the
underlying system.

~~~
6gvONxR4sf7o
Are you effectively interpreting it, or are you compiling it to something
else? I'm really curious how you do this in a performant way.

~~~
Smaug123
We have places where we ultimately emit IL (the bytecode of .NET). However,
usually we just emit F#, restricted to certain constructs which are
allocation-free and so forth.

We aren't producing F# in the sense that we have to invoke the F# compiler
ourselves, though. In the calculator example, given the description `Add
(Negate (Const 5)) (Const 3)`, we might ultimately end up having assembled an
F# function in memory like `fun () -> -5 + 3`; when this function is invoked,
the value `-2` will be calculated. Ultimately we usually put together the
functions in the usual way you make a function: by composition of smaller
functions. The very simplest expressions like `Const 5` have a simple template
like `fun () -> 5` which we can just directly produce; more complex
expressions have to be interpreted recursively into F# function objects.

I agree it's a bit hard to explain unless you're trying to solve a less
trivial problem :(
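
A TypeScript sketch of that assembly step (hypothetical names, not the actual G-Research code): the simplest constructors get a direct template, and larger expressions are compiled recursively by composing the smaller compiled functions.

```typescript
// The calculator description as tagged data.
type Expr =
  | { tag: "Const"; value: number }
  | { tag: "Negate"; arg: Expr }
  | { tag: "Add"; left: Expr; right: Expr };

// Compile a description into a zero-argument function. Compilation
// happens once; invoking the result does no interpretation of the tree.
function compile(e: Expr): () => number {
  switch (e.tag) {
    case "Const": {
      const v = e.value; // simple template, like `fun () -> 5`
      return () => v;
    }
    case "Negate": {
      const f = compile(e.arg);
      return () => -f();
    }
    case "Add": {
      const l = compile(e.left);
      const r = compile(e.right);
      return () => l() + r();
    }
  }
}

// Add (Negate (Const 5)) (Const 3)
const fn = compile({
  tag: "Add",
  left: { tag: "Negate", arg: { tag: "Const", value: 5 } },
  right: { tag: "Const", value: 3 },
});
fn(); // -2
```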

------
kazinator
Lisp hacker summary.

Defunctionalization is the elimination of function objects (closures) from
run-time representations.

Functions that don't carry an environment (such as global functions) can be
replaced by symbols. For instance, a simple calculator's binding for the
negation button `-` can use a symbol such as Negate instead of the #<closure>
for the negation function.

Functions that carry an environment can be replaced by objects which
represent that information with explicit properties. The object can somehow
later be used as if it were a function anyway. (Either a closure can be made
which references the object in its environment, or else the object itself can
be callable like a function.)

These representations are more readable than #<closure> when debugging,
usefully susceptible to manipulation by code, and susceptible to validation.
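
A sketch of that summary in TypeScript (hypothetical tags): the environment-free closure for negation becomes a bare tag, a closure over some `n` becomes an object carrying that environment as an explicit property, and `apply` plays the role of the interpreter that lets either be used as if it were a function.

```typescript
// Replacing closures with inspectable data.
type Fn =
  | { tag: "negate" }                // replaces the #<closure> for negation
  | { tag: "addConst"; n: number };  // replaces a closure over `n`

// The interpreter: makes the data usable "as if it were a function anyway".
function apply(f: Fn, x: number): number {
  switch (f.tag) {
    case "negate":
      return -x;
    case "addConst":
      return x + f.n; // the captured environment, now an explicit property
  }
}

apply({ tag: "negate" }, 7);         // -7
apply({ tag: "addConst", n: 3 }, 7); // 10
```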

------
kqr
In the testing example, I honestly don't see the value of the suggested
approach over passing in two continuations. In production, the actual I/O
methods can be passed in, and in testing we can pass in functions that
validate their arguments.

This "dependency injection" type approach would be... functionalisation (?)
i.e. the opposite of the suggested approach. But it also leads to greater
decoupling.

I often find refactoring in the spirit of functionalisation _more_ powerful,
because then what I do becomes less about implementing a specific piece of
logic, and more about creating a robust library of combinators that can be
puzzled together to implement the desired business logic. In my experience,
getting combinators right is easier but also more productive.

With the suggested approach, changes to the logic require mirrored sets of
changes across many locations (because of the expected protocol of
discriminated cases), whereas the functionalised code requires changes only
in the continuations or in the combinator, but generally not in both.
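
The contrast between the two styles might be sketched like this in TypeScript (all names hypothetical): style A injects continuations directly, while style B returns a defunctionalised description that a test can inspect without any mocks.

```typescript
// Style A ("functionalised"): pass in two continuations;
// tests inject validating functions, production injects real I/O.
function runWithContinuations(
  value: string,
  onSuccess: (v: string) => string,
  onFailure: (e: string) => string
): string {
  return value !== "" ? onSuccess(value) : onFailure("empty input");
}

// Style B (defunctionalised): return a description of the outcome,
// to be interpreted elsewhere.
type Outcome =
  | { tag: "success"; value: string }
  | { tag: "failure"; error: string };

function runDefunctionalised(value: string): Outcome {
  return value !== ""
    ? { tag: "success", value }
    : { tag: "failure", error: "empty input" };
}

runWithContinuations("", (v) => v, (e) => `error: ${e}`); // "error: empty input"
runDefunctionalised("");               // { tag: "failure", error: "empty input" }
```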

~~~
gridlockd
> I often find refactoring in the spirit of functionalisation more powerful,
> because then what I do becomes less about implementing a specific piece of
> logic, and more about creating a robust library of combinators that can be
> puzzled together to implement the desired business logic.

The problem here is that your state is almost always transient and non-
inspectable. Almost all the complexity in any non-trivial program is in the
state, whereas the transformations are usually trivial and things like
combinators don't really matter one way or another.

------
BurningFrog
What programming language is that?

I can't read it enough to understand the article.

~~~
spawarotti
Looks like Haskell to me.

~~~
Tyr42
It's similar, but Haskell uses :: for types, and : for building lists, while
the other ML family languages (and typescript) uses : for types, and some use
:: for lists.

Also Haskell doesn't have modules, so I thought it was ML. Though I think
someone said it was F#? I don't know those well enough to tell them apart.

~~~
exceptione
Haskell has modules.

~~~
Tyr42
uh, parametric modules? Haskell tends to use Typeclasses instead. (Yes, I know
each file gets called a module, but that's not really the same as seeing a
"module" keyword in the file). Unless backpack counts? I haven't been
following it.

I know there's a thing in Coq/ML/OCaml which uses the keyword `module` and can
do some of the same things typeclasses can do, but they aren't exactly the
same. I don't know them well enough to explain them, but I know Haskell
doesn't have them.

[https://gitlab.haskell.org/ghc/ghc/wikis/backpack](https://gitlab.haskell.org/ghc/ghc/wikis/backpack)

------
Filligree
The article makes a lot of references to an 'initial algebra', and there's a
link, but it goes to a PDF of a talk. I didn't get anything useful from just
the PDF itself.

Is there a video of the talk?

~~~
Smaug123
When Skills Matter went bust, the only public recording I know of that talk
vanished from the Internet. They've found a buyer now, though, and hopefully
soon the talk will return. I will suggest to the talk's author that he hold it
again so that he can put the recording somewhere more permanent.

------
luord
The article was good enough. It didn't mention the many drawbacks this
approach can have[1], but it was a well-written description of it.

Many of the comments, however, are... Not quite as good; there was a bit of
elitism and maybe even zealotry[2]. As usual, it soured an interest in FP (not
that I've ever needed it) that an even slightly humbler approach might have
fostered.

Now I'll dabble a bit in the self-congratulatory tone of some of the comments:
I have never written a line of F# and the last time I dabbled with a primarily
functional language was a couple of toy projects in Clojure (quite different)
years ago and yet I was able to follow along the code and the explanations.

Functional approaches aren't all that different or revelatory for someone with
enough development experience[3] and it's just a different approach, who would
have thought it?

[1]: Not unexpected; functional programmers who've convinced themselves that
FP has no drawbacks are unfortunately common. I'm not saying the author is one
of those, though; he probably thought the article was long enough as is.

[2]: Also not unexpected in any discussion about paradigms, _especially_
functional programming.

[3]: I've seen the general idea discussed here in other, more multiparadigm
languages.

------
bcrosby95
I don't completely understand this technique, so maybe I'm off in my
interpretation of it. But it seems like something, flexibility-wise, between
hard coding things and embedding a scripting language.

Back in high school I liked to work on MUDs (multiplayer text roleplaying
games). It feels like this is the sort of thing we resorted to a lot. Content
creators that couldn't code could use a text UI to build up spells, skills,
etc that the code would use to apply the specified effects. We had basic
things (such as "fire damage") and more generic modifiers (such as "make the
next thing apply to everyone close to you rather than just 1 person").

And every step of the way other subsystems could intervene and modify the
outcome, so someone could e.g. create a monster that nullified all fire damage
someone tried to deal to anyone nearby.

For people that did know how to code, we had a basic scripting language for
more flexibility. But these scripts still plugged into the above system.

~~~
habitue
Yeah, it's very similar conceptually to adding a scripting language. But the
point isn't to add user extensibility to the system, it's to have an
introspectable description of the program.

So in the scripting analogy, what you'd do is take the program you wrote in
C++ or whatever, and write a little mini-language in which you can express all
of the functionality needed to write the game. For example, C++ features like
template metaprogramming aren't necessarily something you need to write
games, but "spawn an NPC" is a very common task. You reduce this language to
just what's needed to write the game.

The difference from a scripting language is that instead of the game being
written in C++ with a little interpreter for the scripting language, the C++
is just a compiler for the scripting language. You take the script
description of the game and compile it down to the final game.

That little level of indirection allows you to do things like optimization,
but also lets you do things like easily swap out how the program is evaluated.
For example, in a graphical game, you could skip rendering to the screen, or
advance time in jumps instead of needing to wait for timers, etc.
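
That swap might look like this in TypeScript (a toy sketch, not any real engine's API): one game "script" as data, and two interchangeable interpreters over it, one of which skips waits entirely.

```typescript
// The game script as data.
type Cmd =
  | { tag: "spawnNPC"; name: string }
  | { tag: "wait"; seconds: number };

const script: Cmd[] = [
  { tag: "spawnNPC", name: "guard" },
  { tag: "wait", seconds: 60 },
  { tag: "spawnNPC", name: "merchant" },
];

// "Live" interpreter: would really render and sleep; stubbed as a log here.
function runLive(cmds: Cmd[]): string[] {
  return cmds.map((c) =>
    c.tag === "spawnNPC" ? `spawn ${c.name}` : `sleep ${c.seconds}s`
  );
}

// Fast-forward interpreter over the same data: advances time in jumps
// by dropping the waits instead of executing them.
function runFast(cmds: Cmd[]): string[] {
  return cmds
    .filter((c): c is { tag: "spawnNPC"; name: string } => c.tag === "spawnNPC")
    .map((c) => `spawn ${c.name}`);
}

runFast(script); // ["spawn guard", "spawn merchant"]
```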

------
pierrebai
So.... this is just a fancy name for the advice "give your functions a name"?

The example he gave is just replacing an anonymous lambda for addition with a
named function called Add?

Maybe I'm too critical, but this is a thing I see in many fields:

1\. Take a simple well-known idea and give it a new obscure name.

2\. Write blogs, give lectures, organize summits and conventions.

3\. Profit!

~~~
barrucadu
Add is a piece of data, not a function. The equivalent in C would be something
like:

    
    
        enum function { add, ... };
    
        // then inside your code
        switch (myfunction) {
        case add:
            return a + b;
        ...
        }
    

Defunctionalisation is about removing higher-order functions, not about
naming anonymous functions.

It's not a new trend; the first citation on Wikipedia is to a paper from
1972.

~~~
pierrebai
You're focusing on my not knowing the syntax and specific of the languages.
It's still just giving names to things that did not have a name. Whether the
name is a variable name, a function name, a type name, doesn't matter.

It's still a fancy, obscure jargonish way of saying "name things".

~~~
dgb23
There is naming but that is not the point. The point is that you turn
functions into (serializable) data. Nothing fancy happening here.

------
moron4hire
A lot of this and a few other links in this thread end up looking like
creating an AST limited to the operations you want to expose to the user, in a
serializable way. I think that's even called out explicitly in the articles.
But I'm left thinking "what is the serialized structure of an AST?" It's
source code. So we've implemented a language that then we have to use from
within our code that is not as ergonomic as the host language.

In a way, it's kind of like type-safe eval(), with the limited nature of the
purpose-specific implementation creating a sandbox of types around the eval,
i.e. the impl isn't complete enough to give us turing completeness, File IO,
or other things that would let the data turn into executable code that could
escape the sandbox. So I'm curious to see if some wrappers around Roslyn[0]
could get us less verbosity than Roslyn itself, a more complete impl that
doesn't have to be written for every application, and the sandboxing necessary
to prevent it from handing the user a fully-automatic submachine gun
pointed at our database.

Maybe it's something like a white-list for Roslyn structures. You could take a
C# file, parse it with Roslyn, filter through the white-list, and either
error-out if the user is doing something nefarious, or compile the structure
and continue as normal. And then the code that we pass around as data could
just be C# code, rather than having to have our own AST structures that we
serialize on our own.

Maybe you could even allow limited forms of looping by injecting infinite-
loop-breaking code that does some simple pre-/post-condition checking. Like
"while(true) loops are only allowed to execute 10K times", or "for loops need
to be bounded correctly for the update expression, and the index must not be
modified in the body".

[0] Side note: my office is in a neighborhood called "Rosslyn", so I'm
constantly misspelling one or the other.

------
ridaj
These are very useful patterns!

I feel that the thing that I haven't found my way around about this style of
programming is how to make the types scalable. Once the type `Expr` is set to
include literals, addition, subtraction, and custom functions, there's no
clean way to extend it in a modular fashion with more _typed_ operations (eg
say you wanted to add multiplication).

When I say modular I mean either in a separate file, or in any other way that
doesn't turn every function of `Expr`, over time, into a giant, unreadable
pattern-matching statement spanning multiple pages once there are multiple
dozens of operators in the type. Any suggestions for that?

------
jawarner
This could benefit from a discussion of when it is and when it isn't useful or
practical to apply the technique described.

It helps with serialization, and optimization in cases like deep learning
frameworks. In other cases I think it smacks of overengineering. If you need a
representation of a function along with its arguments to pass to other
systems, then do it as needed. Otherwise, YAGNI.

------
sodaplayer
I have a budding interest in ML languages and saw a couple of references to
category theory in this article and the linked slides. Are there any good
resources for learning category theory aimed at ML programmers, or is there a
list of category theory concepts that programmers will often come across?

~~~
urxvtcd
I think this is fairly popular, though I have not read it myself:
[https://bartoszmilewski.com/2014/10/28/category-theory-
for-p...](https://bartoszmilewski.com/2014/10/28/category-theory-for-
programmers-the-preface/)

------
brianberns
As an F# developer with just enough category theory to follow along, I think
this is really great.

I find the workaround for existential types in .NET particularly interesting,
but it seems so verbose. Is there no way to do it with plain functions in
order to avoid all the explicit type signatures?

~~~
Smaug123
Thanks!

I'm afraid I don't know of any way to make the existential types hack neater.
If you find one, I'm all ears!

There is an issue open with the F# compiler
([https://github.com/fsharp/fslang-
suggestions/issues/567](https://github.com/fsharp/fslang-
suggestions/issues/567)) to allow higher-ranked types, but there are (well-
founded) objections that this could be a pretty big jump in language
complexity.

------
contingencies
_The major thing that we found was that you had to look at the whole problem._
\- Joseph Henry Condon, Bell Labs

... via
[https://github.com/globalcitizen/taoup](https://github.com/globalcitizen/taoup)

------
d3ntb3ev1l
Guessing one of the highest velocity teams on the planet

------
teyc
Nice! I haven't seen this expressed this way in functional programming
before, but it feels more natural in ML.

Is it possible to nest these computations?

~~~
Smaug123
I'm not quite sure what you mean by "nest". You can certainly have a
computation expressed in terms of a defunctionalised initial algebra, which
contains computations expressed in terms of different defunctionalised initial
algebras.

------
lincpa
Correct use of functional programming:

[The Pure Function Pipeline Data Flow v3.0 with Warehouse / Workshop
Model]([https://github.com/linpengcheng/PurefunctionPipelineDataflow](https://github.com/linpengcheng/PurefunctionPipelineDataflow))

1\. Perfectly defeat other messy and complex software engineering
methodologies in a simple and unified way.

2\. Realize the unification of software and hardware on the logical model.

3\. Achieve a leap in software production theory from the era of manual
workshops to the era of standardized production in large industries.

4\. The basics of, and the only way to, `Software Design Automation (SDA)`,
just like `Electronic Design Automation (EDA)`, because [The Pure Function
Pipeline Data Flow] systematically simulates integrated circuit systems.

