They do use the word "function", just not in terms of the language itself, but in the same sense in which we say that a falling apple integrates the function from its mass to acceleration; it's a human interpretation of the encoding.

I would state the Church-Turing thesis like so: a universal computation model can compute anything any physical machine can. As Hodges says in one of the links in the article, this is how Church and Turing themselves understood it. How would I define "anything"? As a function on any countable set -- the natural numbers would do -- and I would explain what it means to compute a function: pick a function, pick an input element, pick a finite representation for the input and output; if a physical machine can compute the output representation from the input representation, so can a universal computation model.

I could choose a function between any two countable sets, e.g. (Nat -> Nat) -> (Nat -> Nat): pick a representation for the input and output, and all models can compute the same functions, and those are the functions that machines can compute. The models differ in their abilities regarding representation within the model, but as the language models' representations require very significant computational work, I'd expect their representations to do more: they're paying for it.

A compiler for a language like C works hard, so you'd expect it to provide you with a representation more suitable to your wishes than machine code. It's just that because such a compiler translates a "meaningful" language to a machine encoding, everybody notices the work. The language "computation" models work in a vacuum, so I just pointed out that even though they don't translate anything from one representation to another, they still have to pay for their natural representation.

> a function between any two countable sets, e.g. (Nat -> Nat) -> (Nat -> Nat): pick a representation for the input and output

There isn't a TM-suitable representation for things of type (Nat -> Nat). No matter how you try to encode a Nat -> Nat as a Nat (or a bitstring, if you prefer), you're going to screw up equality (which means you're actually encoding some other type). You'll map two unequal functions to the same Nat, or (more likely) map one function to multiple unequal Nats. This is the difference between how lambda calculi and Turing machines handle higher-order things: a Turing machine can always inspect the "internals," so you need an actual bijection from functions to strings, whereas lambda calculus hides the internals, so it's safe to map one function to multiple syntactic representations.
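The "one function, multiple unequal encodings" point can be sketched concretely. In this illustration (the helper names are hypothetical, not from any library) the encoding of a Nat -> Nat function is simply its source text, and two extensionally equal functions end up with distinct encodings, so equality of encodings disagrees with equality of the functions they denote:

```python
# Sketch: encoding Nat -> Nat functions as strings breaks equality.
# Two extensionally equal functions get two different encodings.

def encode(src: str) -> str:
    """The 'encoding' of a function is just its source text."""
    return src

def decode(code: str):
    """Recover a callable from its encoding."""
    return eval(code)  # fine for this illustration; never eval untrusted input

f_code = encode("lambda n: n + n")
g_code = encode("lambda n: 2 * n")
f, g = decode(f_code), decode(g_code)

# Extensionally equal on every input we can check...
print(all(f(n) == g(n) for n in range(1000)))  # True
# ...yet their encodings are distinct strings.
print(f_code == g_code)  # False
```

Deciding whether two arbitrary encodings denote the same function is, of course, undecidable in general, which is why no encoding can get equality right.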
If you take the normal mathematical definition of a function as a mapping from one set to another, pick two sets, pick a representation, and all models can compute the same mapped elements. Tell me what your (higher-order) function is — i.e., which output element it maps to which input element — and a TM would compute the exact same outputs. That it can't internally "reason" about those functions is obvious, but, as I showed, that ability comes at a cost. You could build, say, a Haskell compiler that would pay that cost and translate the language to a machine representation (or, speaking more abstractly, a TM could simulate, say, System Fω — at a computational cost), or you can use an abstract language model without any underlying machine, but the same cost must be paid regardless.

This shows that the concept of a type requires actual computational work rather than being just a matter of "free" perspective. Your mere presentation of a function as an inhabitant of a type creates a computationally harder problem than presenting a function as a mapping between sets. Put differently, when a typed language model computes a function (Nat -> Nat) -> Nat, it does more work, and solves a harder problem, than necessary to compute that function in the normal set-theory sense. You can choose to solve that problem on a TM by simulating type checking if you like, but you can't compare "compute a function in the set-theory sense" and "compute a function in the type-theory sense", because those are two different problems with two radically different computational complexity costs.
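The two "problems" above can be separated in a toy sketch (all names hypothetical): evaluating an application of a simply typed term is one computation, while deciding that the term inhabits a claimed type is a second, additional computation that a TM simulating a typed language would also have to perform:

```python
from dataclasses import dataclass

# Terms of a toy simply-typed calculus.
@dataclass
class Lit:          # a Nat literal
    n: int

@dataclass
class Var:
    name: str

@dataclass
class Lam:          # \x : ty . body
    arg: str
    arg_ty: object
    body: object

@dataclass
class App:
    fn: object
    val: object

NAT = "Nat"
def arrow(a, b):
    return ("->", a, b)

def evaluate(t, env=None):
    """Set-theory-style computation: map an input element to an output element."""
    env = env or {}
    if isinstance(t, Lit): return t.n
    if isinstance(t, Var): return env[t.name]
    if isinstance(t, Lam): return lambda v: evaluate(t.body, {**env, t.arg: v})
    if isinstance(t, App): return evaluate(t.fn, env)(evaluate(t.val, env))

def typecheck(t, ctx=None):
    """The extra work: deciding that t inhabits a type at all."""
    ctx = ctx or {}
    if isinstance(t, Lit): return NAT
    if isinstance(t, Var): return ctx[t.name]
    if isinstance(t, Lam):
        return arrow(t.arg_ty, typecheck(t.body, {**ctx, t.arg: t.arg_ty}))
    if isinstance(t, App):
        fn_ty = typecheck(t.fn, ctx)
        assert fn_ty[0] == "->" and fn_ty[1] == typecheck(t.val, ctx), "ill-typed"
        return fn_ty[2]

ident = Lam("x", NAT, Var("x"))
term = App(ident, Lit(41))
print(evaluate(term))    # 41  -- computing the function in the set-theory sense
print(typecheck(term))   # Nat -- the additional typing judgment
```

The point of the sketch is only that `typecheck` is a genuine second traversal with its own cost; it is not somehow obtained for free by writing the term down.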
You can decide to do that extra work or not (you may want to because you gain benefit from it), but you can't compare a model that always does — and pays handsomely for it — with one that doesn't.

Otherwise, I could create a language model that solves all NP problems in polynomial time by defining my language as requiring a proof certificate with every input problem, which my model would then check in polynomial time. No one would believe my language actually solves a computational problem: it simply pushes the difficulty to the collaborator.

Equality is also an internal question. Define any relation between two sets (your definition of equality), and all models would be equally able (or unable) to decide it.
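The certificate trick above is easy to make concrete (helper names hypothetical): given a CNF formula and a claimed satisfying assignment, checking the certificate takes linear time, but the model only looks fast because the hard part — finding the assignment — was pushed to whoever supplied it:

```python
# Polynomial-time checker for SAT certificates.
# A clause is a list of ints in DIMACS style: k means variable k is true,
# -k means variable k is false. An assignment maps variable -> bool.

def check_certificate(cnf, assignment):
    """Return True iff the assignment satisfies every clause. Linear time."""
    def lit_true(lit):
        return assignment[abs(lit)] == (lit > 0)
    return all(any(lit_true(lit) for lit in clause) for clause in cnf)

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
formula = [[1, 2], [-1, 3], [-2, -3]]
print(check_certificate(formula, {1: True, 2: False, 3: True}))   # True
print(check_certificate(formula, {1: True, 2: True, 3: True}))    # False
```

This is exactly the standard NP verifier; declaring a "language" whose inputs are (formula, certificate) pairs changes the input type and therefore the problem being solved.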
> and solves a harder problem than necessary to compute that function in the normal set-theory sense

People who use set theory as their foundations impose on themselves the even harder task of making sure their mappings are coherent w.r.t. whatever notion of structure they want to work with, i.e., that they are homomorphisms in a category of interest.

> Otherwise, I could create a language model that solves all NP problems in polynomial time by defining my language as requiring a proof certificate with every input problem

No, this is wrong. Changing the input type means that you are not, in fact, solving the original problem.
> People who use set theory as their foundations...

Let's not get into this old debate because we'll never get out of it. Suffice it to say that there is no doubt that Turing and Church (and Gödel and von Neumann) used set theory as their foundation, and when they said "function" they meant a set-theoretic function, namely a mapping from a domain (set) to a range/co-domain (set). If by "function" you mean a type-theoretic function, then you're talking about a computationally harder problem, as you need to compute the underlying set-theoretic function and check a proof. You can certainly ask a TM to simulate a type checker and compute a type-theoretic function. There's actual work involved, and someone must do it. You cannot say that your formalism gives it to you for free, or you'll have a formalism that proves things for free, which is impossible.

> Changing the input type means that you are not, in fact, solving the original problem.

Exactly! But that is precisely what you do when you interpret "function" to mean "type-theoretic function", and solve a different problem. Any work that needs to be done under the normal meaning of "function" also needs to be done in the typed formalism. If you need to ensure/prove something, there is work and computational complexity involved. Pushing it to a collaborator doesn't make it disappear (let's call this the law of preservation of computational complexity). Someone has to do it, and in no model does it suddenly become free. It's perfectly OK to say that you start counting after much of the hard work is done, but in that case you need to say that your model requires upfront work, and therefore cannot be compared to models that don't. That's my whole point.
 Of course, proving things, that is, doing mathematics requires work. But at least it's work that produces a useful result, unlike carrying out computations without paying attention to their meaning. (By the way, this meaning always exists, even if you don't particularly care about it.) And the best moment to study the meaning of a computation, in the sense of minimizing the amount of long-term work, is as early as possible, which is exactly what types force you to do.
But this is now a programming discussion, not a computation discussion. Also, I don't think it's right to declare things "useful" only if they're useful to the particular problems you're trying to solve.

BTW, any logical formalism forces you to pay attention to meaning; choosing to encode the logic in types has advantages and disadvantages, but it is not at all the only way to logically describe a program.
> Also, I don't think it's right to declare things "useful" only if they're useful to the particular problems you're trying to solve.

Pray tell, what's the use of a computation without paying attention to its meaning? My use case is simply "knowing the meaning as early as possible", and there ought to be no discussion that this should be everyone else's use case as well. Nobody performs a computation without some sort of expectation about the relation between the inputs and the outputs.

> BTW, any logical formalism forces you to pay attention to meaning

I never said types are the only way. For instance, you could use Hoare logic, but it's notoriously more difficult to use, precisely because it gives programs no a priori meaning.
> Pray tell, what's the use of a computation without paying attention to its meaning?

First, I don't think that the only way to assign meaning to a computation is by formally assigning meaning to the syntactic components that comprise its program, as written in some human-readable language. Most of the relevant meaning is in the human interpretation of the input and output, or of the sensors and actuators. Components' meaning may be stated informally. I am not aware that formal meaning is the only valid meaning. People wrote the software that landed men on the moon in assembly; that was pretty meaningful. I believe this is still the prevailing view in computer science.

Second, you keep identifying the notion of computation with the notion of a program written by a person. The theory of computation, from its earliest days -- by Turing himself -- has studied computational phenomena in neural networks, genetics (Turing was a pioneer of both NNs and genetic algorithms) and quantum phenomena. Some of the greatest achievements in CS are not specifically related to software (I count at least 20% of all Turing awards).

> For instance, you could use Hoare logic, but it's notoriously more difficult to use, precisely because it gives programs no a priori meaning.

Or TLA+, which makes types notoriously more difficult to use by comparison.
> First, I don't think that the only way to assign meaning to computation is by formally assigning meaning to the syntactic components that comprise its program, as written in some human-readable language.

Please suggest alternatives -- ones that actually work.

> Most of the relevant meaning is in human interpretation of the input and output or the sensors and actuators.

The relation between the input and the output. This can only be stated formally.

> Components' meaning may be stated informally.

That leads to disaster very quickly.

> I am not aware that formal meaning is the only valid meaning.

Well, I am. Too much nonsense has been said as a result of not formalizing things.

> Second, you keep identifying the notion of computation with the notion of a program written by a person.

No. Even if a computation "arises in nature" (which is just your way of looking at things; actual physical phenomena are what you can measure, plain and simple), the only way to make sense of it is to write a program that reproduces the computation. This is how scientific theories work.

> Or TLA+, which makes types notoriously more difficult to use by comparison.

Some type systems are more difficult to use than others, and the type theory community is definitely guilty of not paying enough attention to usability, but I don't see how model checking has a higher power/cost ratio than sensibly designed type systems. Which tool satisfies the following criteria?

(0) Compositionality: you can understand a large system by studying its parts in isolation.

(1) Handles higher-order constructs (first-class functions, objects) gracefully.

(2) Stops you from saying nonsense as early as possible.

I rest my case.
> Please suggest alternatives - that actually work.

AFAIK, the only languages that fully express the computational meaning formally with types are the dependently typed ones. AFAIK, there has been exactly one non-trivial real-world program written in such a language, and its author wasn't too pleased with the process (I'm referring to CompCert and Xavier Leroy). Every other program in every other language has most of its meaning expressed informally, in the mind of the programmer. Unless you believe that no programs work, nearly all programs that do work express most of their meaning informally.

> Too much nonsense has been said as a result of not formalizing things.

Formalizing things doesn't imbue them with good sense. You can state plenty of nonsense formally. The only thing you're guaranteed is that it's consistent nonsense, and, as someone said on one of my favorite TV shows, "The sole virtue of the ineffectual is consistency".

> Which tool satisfies the following criteria? ...

TLA+, and far more elegantly and simply than any typed language you've seen.
> AFAIK, the only languages that fully express the computational meaning formally with types are the dependently typed ones.

I said "formally", not "with types". I firmly believe in using the right tool for the job, and some things are best handled with manual proofs. As helpful as types might be for computer-generated proofs (type inference), I'd rather carry out my manual proofs using good old-fashioned predicate logic. Some proof techniques still need to be backed up by type structure, though (e.g., induction over datatypes).

> Every other program in every other language has most of its meaning expressed informally, in the mind of the programmer.

Unfortunately, I can't read minds. I can only read proofs that have been explicitly written down.

> Unless you believe that no programs work, nearly all programs that do work express most of their meaning informally.

My definition of "work" is "works in all cases". No bugs. No unforeseen cases.
So you're specifying an empty set. There has never been a large program that's been completely formally proven -- not a single one -- and there are therefore no methods that work. The small programs that have been mostly proven took such a great effort that their creators wouldn't say that the approach "works" in any widely applicable way.
 Yep, that's the state of the art. It means that more human work has to be done on making it easier to write programs that work.
> > Which tool satisfies the following criteria? ...

> TLA+, and far more elegantly and simply than any typed language you've seen.

Still eagerly awaiting your elegant and simple explanation of this! It sounds groundbreaking (genuinely).
Well, the 1994 paper does have about 2500 citations:

https://www.microsoft.com/en-us/research/wp-content/uploads/...

and an older paper covering an interesting theoretical property has over 1000. Also, the author did get a Turing award -- not primarily for this, but it got a mention. So yeah, we're talking about a very well-known formalism in software verification, and people were impressed. I don't know about groundbreaking, because the community is different: in software verification you get points for new proof methods and such, not necessarily for creating an elegant language. TLA+ is about finding the ideas that have been shown to work at scale, and creating a very elegant, minimalist language to express them, based on simple ordinary math as much as possible.

There is no shortage of good tutorials (especially considering how simple TLA+ is, so there's no need for 500 tutorials on monads), and the language was designed for and is used by engineers in industry working on large, complex systems. Companies that have started to use TLA+ (or use it extensively) on large systems -- Amazon, Oracle and Microsoft -- report that their engineers pick it up on their own from available materials in two weeks (that was my experience, too; it's easier than most programming languages). But those are tutorials and don't focus on the theory. Amazon published two excellent technical reports, one in CACM, about their experience, with pretty good data (that's what convinced me; I'm impressed with what's been shown to work in the field).

There is also no lack of more academic, theoretical material, but, as usual, it is mostly concerned with technical details. What's missing is a brief overview of the theory. I'd read at least the introduction to the original paper above, this 4-page historical note[1], and this[2].
> What doesn't work is a specification language that's also a programming language.

Absolutely, and thus types and specification via TLA+ are not in opposition. In fact they can work in harmony.

> someone who's familiar with programming languages may view specification languages as groundbreaking

That's not what I find groundbreaking. What I find groundbreaking is that TLA+ is (supposedly) so composable despite being so simple. I remain skeptical. Indeed, even the fundamental TLA+ notion of "priming" variables seems like it would immediately be an impediment to composability.

> There's a result (mentioned in my Curry On talk/post) that, simply put, states that correctness doesn't compose.

Yes, I recall that. I believe the theoretical result, of course, but I am yet to be convinced of its practical relevance. Correctness does compose in my experience as a mathematician and as a software developer. In fact, I can't see any way that correctness can be achieved except through compositionality.

> A TM with 6 control states can have 10^35,000 states or so

Please explain your terminology. Surely it can have infinitely many states? Indeed, a Turing machine with 1 control state can have infinitely many states.

> states in sub components simply cannot be hidden

I find this a very puzzling statement. States in subcomponents can be easily hidden in every common-or-garden programming language.

> You write your assumptions, and hope to get verification for your claims.

Sure, but as we've observed, it doesn't actually tell you anything about the final implementation you come to write.

> The problem here may still be that you're thinking of composability in terms of functional languages instead of composability of abstract programs.

I'm thinking of composability in terms of building things out of smaller things and being able to deduce properties of the things based on the properties of the smaller things.
That notion precisely captures "composability" (even "compositionality", if we want to get formal) as far as I am concerned. What does it mean to you?

> The book won't tell you that what you know as parameterization is just another form of nondeterminism

OK, but you could. What exactly does that mean?