For anyone interested, Dr. Lamport has a book called "Specifying Systems", which has more in-depth explanations of many of the topics in the linked paper:
My question is: Turing Machines are easily and intuitively understood as state-machines. But what about Functional Programming? I thought functional programs have no 'state'. What is the simplest way to explain how functions are also state-machines?
> What is the simplest way to explain how functions are also state-machines?
Here is a complete specification of a state machine's transitions:
f LightOff OnPressed = LightOnDimmed
f LightOnDimmed OnPressed = LightOnMedium
f LightOnDimmed OffPressed = LightOff
f LightOnMedium OnPressed = LightOnBright
f LightOnMedium OffPressed = LightOff
f LightOnBright OnPressed = LightOnDimmed
f LightOnBright OffPressed = LightOff
Here is a Haskell function which models the above state machine's transitions:
f LightOff OnPressed = LightOnDimmed
f LightOnDimmed OnPressed = LightOnMedium
f LightOnDimmed OffPressed = LightOff
f LightOnMedium OnPressed = LightOnBright
f LightOnMedium OffPressed = LightOff
f LightOnBright OnPressed = LightOnDimmed
f LightOnBright OffPressed = LightOff
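(To make that compile as-is, you'd also declare the state and event types; a minimal sketch, with the constructor names taken from the transitions above:)

data State = LightOff | LightOnDimmed | LightOnMedium | LightOnBright
  deriving (Show, Eq)

data Event = OnPressed | OffPressed
  deriving (Show, Eq)

-- and the transition function above then has the type
f :: State -> Event -> State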
It seems to me it calculates the result given some input. Can you incrementally give it new inputs and have the result depend not only on the new input but also on all previous inputs?
Where are the previous inputs and results (= state) of the calculation stored which allow the next calculation to depend on previous results? In some variable?
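(With the f above, one way to do that is to thread the previous result through yourself, e.g. by folding over the history of inputs; the fold's accumulator is where the "state" lives. A sketch, with a made-up helper name:)

-- feed a whole history of button presses through f, starting from LightOff
run :: [Event] -> State
run = foldl f LightOff

-- run [OnPressed, OnPressed]              ==>  LightOnMedium
-- run [OnPressed, OnPressed, OffPressed]  ==>  LightOff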
That's not a property of "basic" state machines, right? Their transitions don't depend on the history, just on the current state and perhaps an event. So it's not clear why you're bringing that up when you were asking about how to think of FP from a state machine point of view.
The example in Haskell is great and sheds light on how you can "model" state-machines in a functional language. My concern was the suggestion by the article that state-machines are somehow the basis of all computation. Turing machines intuitively look like machines with state. Functional programs not so much.
Here's a simple example; I'm not saying it proves my point, but I have a question about it:
I have a program which calculates the average temperature of the year so far. You input today's temperature into it and it calculates and outputs the new updated average. If it is implemented as a functional program I can see how it can be given a time-series as input and it outputs the average.
But today is a new day and I have a new temperature measurement which I give to my program. If it really is something I could call a "state-machine" it should be able to calculate the new average for me. But to do that it would need to have all the previous inputs held somewhere in its "state".
Is there a functional program that can do that and give me the updated average temperature? Examples in JavaScript please if possible. Not everybody understands Haskell. Or is it something that can be done only in Haskell?
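(A minimal sketch of such a program, with made-up names, in Haskell only because it's the thread's running language; the same shape works in JavaScript with an explicit accumulator or a closure. The point is that "all the previous inputs" collapse into a small piece of state, here a running total and a count, which the caller passes back in each day:)

-- the accumulated "state": running total of temperatures and number of readings
type AvgState = (Double, Int)

-- one step of the machine: fold in today's reading
addReading :: AvgState -> Double -> AvgState
addReading (total, n) t = (total + t, n + 1)

average :: AvgState -> Double
average (total, n) = total / fromIntegral n

-- day 1: addReading (0, 0) 21.5      ==>  (21.5, 1)
-- day 2: addReading (21.5, 1) 18.0   ==>  (39.5, 2)
-- average (39.5, 2)                  ==>  19.75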
Sure, but I've chased the shifting goalposts enough.
I want you to translate the above state machine to a Turing Machine, or demonstrate in any way how a TM is a tool or model which brings any kind of clarity or simplicity to any modelling, design, compilation or coding activity.
Functional programming is all about substitution as an evaluation mechanism. So you do substitution until it’s no longer possible. The final expression is thus your solution or how you can find your solution. And the final expression is equivalent to the initial one.
The state does not lie in the code, but in the set of variables. Once a specific combination of variables and values is known, so is the output.
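(A tiny example of evaluation by substitution, with a made-up function:)

double :: Int -> Int
double x = x + x

-- double (3 + 1)
--   = (3 + 1) + (3 + 1)   -- substitute the argument for x
--   = 4 + 4
--   = 8                   -- no substitution applies any more; this final expression is equivalent to the initial one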
When you do substitutions the state of your engine changes while you do them. The implementation is keeping track of your intermediate results by storing them in the state-machine that is your runtime. But to me this just suggests that you are using a state-machine to SIMULATE Lambda Calculus.
The point of FP to me is that you can hide the state from the programmer. As far as the programmer, or the reader of the source code, is concerned, there is no state and no state-machine. Lambda Calculus is a specific "model of computation" which does not look like a "state-machine" to me. So calling it a "state-machine" is a bit misleading.
Yes you need a state-machine to "simulate" Lambda Calculus. But how it is implemented, the programmer does not need to know or care. All they see is functions and no state, at least no state they could directly modify.
The program state (if any) and runtime state are two separate things. And if you, the programmer, cannot modify the state of the state-machine, then it is not a state-machine for you.
Check out section 2.2 where he compares different models to state machines and expresses how they can be mapped. In particular look at the BNF grammar one, where he talks about production rules.
In functional languages (especially pure ones) there is the notion of "reductions" which are rewrite rules for the language. So in Scheme (to pick a concrete language) an expression like:
(+ 5 2)
Is reduced by applying the addition operation:
(+ 5 2) => 7
Or (with a suitable definition of factorial to continue his example):
;; using his Program 3 in Scheme instead of C
(define (factorial i)
  (if (= i 1)
      1
      (* i (factorial (- i 1)))))
(factorial 7) => (if (= 7 1) 1 (* 7 (factorial (- 7 1)))) ;; because we substitute 7 for i
=> (if #f 1 (* 7 (factorial (- 7 1)))) ;; reducing (= 7 1) => #f
=> (* 7 (factorial (- 7 1))) ;; reducing (if #f e_t e_f) => e_f
=> ...
Each of these is arrived at by applying a reduction rule (in this case there's only one reduction rule available at each point).
This is analogous to his discussion of BNF grammars. The state transitions are moving from an expression e to an expression e' where some reduction rule has been applied to e.
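(To make that concrete, here is a toy sketch, not from the paper and in Haskell to match the earlier example in the thread: the "state" is the expression remaining to be reduced, and a transition applies one reduction rule.)

-- a toy expression language: the machine's "state" is an Expr
data Expr = Lit Int | Add Expr Expr deriving Show

-- one transition: apply a single reduction rule, if any applies
step :: Expr -> Maybe Expr
step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
step (Add a b) = case step a of
  Just a' -> Just (Add a' b)
  Nothing -> Add a <$> step b
step (Lit _) = Nothing

-- the run of the machine is the sequence of expressions (states) visited
trace :: Expr -> [Expr]
trace e = e : maybe [] trace (step e)

-- trace (Add (Add (Lit 5) (Lit 2)) (Lit 1))
--   ==> [Add (Add (Lit 5) (Lit 2)) (Lit 1), Add (Lit 7) (Lit 1), Lit 8]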
EDIT: Grammar and comments describing application of reduction rules.
I can see that making a call to a sub-function and returning from it to the calling function means the state of the execution has changed. But that is the state of the engine that evaluates the functions. As was said earlier, the functions themselves do not have a state.
I don't think anybody would try to model a state-machine by making each state-change they want to model a call to a sub-function. The engine evaluating the function-expressions has its state, but the functional program itself does not; it just has its code, arguments, and result. It can be in one of the states:
1. Not Called
2. Calculating the result
3. The result has been calculated.
> 1. Not Called 2. Calculating the result 3. The result has been calculated.
That describes an oracle, not a purely functional program. A program describes a process (whether the program is purely functional or totally imperative). The state of the process can be captured in progress because, well, it's a process. There is a series of steps which are applied to produce the calculated result.
In purely functional programs, that would be the reduction rules. Each application of a reduction rule produces a new state (in essence the remaining computation to be performed).
No programmer-visible state. If you ask the programmer "What is the current state of your functional program?" they probably cannot answer, can they? The functional programs have no state; the engine that runs the program of course does. But the engine that runs the program is not a functional program.
It's not that functions are themselves state machines, but functional languages are sometimes implemented as state machines. E.g. the lambda calculus can be implemented on an abstract machine known as the Krivine machine [1].
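(A minimal sketch of that machine, call-by-name with de Bruijn indices; the names and representation here are illustrative, not taken from [1]. The machine's state is a term, its environment, and a stack of pending arguments, and each equation of step is one transition rule:)

-- terms of the lambda calculus with de Bruijn indices
data Term = Var Int | Lam Term | App Term Term deriving Show

data Closure = Closure Term Env deriving Show
type Env = [Closure]

-- a machine state: the current term, its environment, and the argument stack
type MachineState = (Term, Env, [Closure])

-- one transition; Nothing means the machine has halted (weak head normal form)
step :: MachineState -> Maybe MachineState
step (App t u, e, s)              = Just (t, e, Closure u e : s)
step (Lam t, e, c : s)            = Just (t, c : e, s)
step (Var 0, Closure t e' : _, s) = Just (t, e', s)
step (Var n, _ : e, s)            = Just (Var (n - 1), e, s)
step _                            = Nothing

-- run the machine to a final state by iterating transitions
run :: MachineState -> MachineState
run st = maybe st run (step st)

-- run (App (Lam (Var 0)) (Lam (Var 0)), [], [])  ==>  (Lam (Var 0), [], [])
-- i.e. (\x -> x) (\y -> y) evaluates to \y -> y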
Right, what would be the states in it? What would be the state-transitions? How many are there? How does it model a physical system with multiple states and state-transitions?
Can a functional program ever return to its previous state? If not, I don't think it should be called a "state-machine".
It can calculate what should be the next state of some state-machine, but that does not mean it is that "state-machine".
...which implies that (when viewing a computation as a sequence of states [vertices] connected by transitions [edges]) the more functional one's program is, the larger the chunks are that one can effectively model as atomic transitions, and the shorter these state sequences are.
Reducing N may not qualitatively do much from the pedant's point of view, but if one's (formal or informal) analyses tend to scale as O(N^3), reducing N quantitatively may make a difference: between tractable (for formal analysis) or "fits in developers' heads" (for informal analysis) on the one hand, and a big ball of mud on the other.
The C abstract machine says nothing about stacks or program counters. There is the concept of sequencing: what is currently being evaluated, what has been evaluated, and what is not yet evaluated. When a function is called, the calling function is suspended; then when the invoked function returns, the caller is resumed. From that we can infer that there must exist an activation chain.
Mainstream Lisps have all these abstract elements too. The main differences are in data and environments: what constitutes a value, what is a type, what is a variable, what is the lifetime of an object, the scope of visibility of a binding, ...
The paper discusses the fundamental role of state machines in computer science.
In computer science, there's an excessive focus on languages, often obscuring the fact that languages all describe state machines.
The document intends to demonstrate how state machines express computation and clarify the concepts underpinning the various languages used to describe computations. It then delves into state machines' function within the sphere of 'correctness' in computer science, drawing on works related to Abstract State Machines by Yuri Gurevich and others.
Computers are general concrete state machines.
Computer programs work by making a computer behave as a specific state machine.
So it can't be wrong to say that all computer programs describe state machines.
As others have said, I think a programming language must reduce to a state machine, but I'm curious what your counterexample is, especially considering your conviction that it's not even close.
I think you’re correct practically speaking. The spirit of what you’re saying is true.
However, technically and pedantically, all programs must behave like state machines.
Btw, it looks like it's the author of the paper, not GPT-4, making this confusing point, so if one disagrees with the summary it's a disagreement with the paper.
http://lamport.azurewebsites.net/tla/book-21-07-04.pdf