
Pragmatic Functional Programming - tosh
http://blog.cleancoder.com/uncle-bob/2017/07/11/PragmaticFunctionalProgramming.html
======
majewsky
Pretty disappointing article. All the arguments are pretty weak. For example,
the article argues that FP is necessary going forward because immutable data
helps avoid race conditions. However, immutable data does not make a
programming language functional. I could just use Rust, which (although it
incorporates functional constructs) is still clearly imperative, and avoid
race conditions through its particular implementation of immutable data
structures.

The most upsetting part is when the author brags about how Clojure programs
can manipulate themselves at run-time, on a website that's literally called
"Clean Coder Blog". Self-modifying code is the clear opposite of clean code.
Also:

> Maybe we don’t have to worry about chips with 32,768 cores on them.

The author has apparently never written a GPGPU program.

~~~
m_mueller
I have another gripe with these kinds of immutability / FP praising articles:
the assumption that this must be faster, because multicore. As an example,
many if not most scientific applications (for which Haskell wants to position
itself) are memory-bandwidth bound, not CPU bound. These applications also
deal with gigabytes of data at each timestep, applied over dozens if not
hundreds of kernels. So we're talking about thousands to tens of thousands of
functional updates over data objects, each on the order of gigabytes. The way
this is dealt with efficiently in many cases is pointer swapping: after each
timestep, or even between intermediate kernels, the old output becomes the new
input and the old input is overwritten by the new output. Swapping makes this
an O(1) operation. Allocating new memory for the output at each kernel launch
would absolutely kill performance - we're talking one to two orders of
magnitude here. Even if an FP compiler comes along that is able to optimize
this, if it ever breaks I have zero trust that it will be simple to figure out
why the compiler doesn't do what I wanted, and at that point I'd rather just
program it procedurally. To my knowledge, game programming is pretty much the
same, so these use cases are rather widespread.

I think there must be a (probably not yet invented) language somewhere that
would allow me to write computations in immutable FP style, either as
pointwise operations (i.e. stencils) or as matrix operations, _but_ that also
lets me pass in information directly instructing the backend what to do with
the memory. That is, I want to be able to program my own memory pools and link
pointers to the memory slots needed for the computations. Maybe that part
could even be done with visual programming, akin to linking GUI elements to
properties in Xcode: let the compiler figure out where a memory slot is
needed, do a dependency analysis over the slots, and show graphically which
slots are linked to which inputs and outputs. Then let me add constraints to
that linking and recompute a new solution whenever I move something around.
The next step would be a performance model for each processor, finding the
best solution with machine learning, _but_ letting me actually see what the
solution is without having to go read assembly (which, frankly, for FP
frontends seems near impossible, since the assembly has even less to do with
the frontend code than when you compile down something like C or Fortran).

~~~
Recurecur
Exactly. I've actually been giving this a fair amount of thought, as it also
relates to "GC for everything", another obstacle for real-time programming
(soft or hard). I like many things about the FP approach, especially in terms
of reasoning about the code. I just think the computer science community needs
to focus on basically what you're advocating: immutable recycling alongside
garbage collection.

It seems to me that this is a fairly tractable problem, and things should move
toward a mixed memory-handling model, where one could use one or more of
manual memory management, recycling, or GC as needed. The FP language Scala
Native already provides both manual management and GC, for instance...

~~~
m_mueller
Thank you. Yes, well, from the CS people I usually just get blank stares.
"What did you say, you prefer _Fortran_?" I feel like there's a big divide
between engineering and CS. The latter would rather never care about anything
to do with hardware. IMO the real innovation usually comes at the intersection
of hardware and software - like the recent boom of neural nets. Those have
been known since the '80s and were promptly dropped because the hardware
wasn't ready, until someone 30 years later took the time to write a good CUDA
library and make it fast enough to be usable. Meanwhile the CS community
focuses on adding more abstractions until no one can figure out the hardware
implementation anymore. Sorry for the grumpiness, guess my mood isn't the best
today...

------
onikolas
"The speed of light limit had been reached. Signals could not propagate across
the surface of the chip fast enough to allow higher speeds."

Sorry, this is blatantly wrong. The 'wires are too long' problem was solved a
couple of decades ago by adding pipeline stages.

Processors can easily get faster, today, by increasing the clock speed.
Problem is, the generated heat fries them. Heat is a function of voltage and
frequency. You can only lower voltage so much before switching states becomes
unreliable. After that the only option is to lower frequency.

~~~
js8
I am not an expert, but I heard differently. IBM z13 CPU had to decrease the
frequency compared to its predecessor, because they had a rule that the CPU
should be able to operate on contents of L1 cache in one cycle, and they
decided to expand the L1 cache a bit. So I think the "wires are too long"
problem is alive and well when it comes to L1 cache size.

~~~
randcraw
Of course, long circuit lines will limit further increases in clock frequency.
But in practice, heat is the binding constraint. In CMOS, heat arises from
charge dissipation: dynamic power scales with switched capacitance times
voltage squared times clock frequency. Unless we can switch to a transistor
fabric that does NOT dump charge with every state change, heat will remain the
dominant problem for future CPU speedup.
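
For reference, the charge dissipation described here is conventionally
captured by the first-order CMOS dynamic-power model (a standard textbook
relation, not something stated in the comment):

```latex
P_{\text{dyn}} \approx \alpha \, C \, V^{2} \, f
```

where \(\alpha\) is the activity factor, \(C\) the switched capacitance,
\(V\) the supply voltage, and \(f\) the clock frequency. Because \(V\) must
rise roughly with \(f\) to keep switching reliable, power grows faster than
linearly in frequency, which is why lowering \(f\) is the lever of last
resort.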

Despite what some claim, multicore is no solution to speedup. Coarse or fine
grain parallelism is useful only when data parallelism is possible, which
applies to less than 5% of today's software (graphics, speech, deep learning,
et al). Functional programming may free code from explicitly addressing
variables, but FP's implicit addressing does not magically eliminate
sequential dependencies.

The vast majority of processes in mainstream software are inherently
sequential. Simply implementing them in an FP won't make them parallel, so
adding threads won't increase speed.

------
TheCoelacanth
> Now you know 95% of Lisp, and you know 90% of Clojure. That silly little
> parentheses syntax really is just about all the syntax there is to these
> languages.

And yet I still can't do anything useful with them. A language isn't just the
syntax, it's also all of the functions that are defined by default.

------
crb002
FP is going to kill it when compilers catch up to utilize new SIMD
instructions.

I think we are going to see more per-core "L0" caches and less reliance on the
shared "L1" cache. With that many caches to juggle, you want a strongly typed
compiler managing them, not hand-tuned code.

