The most important machine that was never built (quantamagazine.org)
68 points by akeck on May 5, 2023 | 32 comments



Related: It's a crime that the mechanical Charles Babbage Analytical Engine (1837) hasn't been built.

Just found this recent progress report on an effort to do just that (written by Doron Swade [https://en.wikipedia.org/wiki/Doron_Swade], posted by HN user jgrahamc):

https://blog.plan28.org/2023/03/spring-2023-report-to-comput...

From https://en.wikipedia.org/wiki/Analytical_engine:

The analytical engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.

I don't think it gets much more steampunk than this in actual history.


If you haven't come across it, I recommend you check out 2D-goggles [0] for what might have happened...

Sydney Padua is an animator and she's created some 3d animations of parts of the analytical engine. [1]

[0] https://sydneypadua.com/2dgoggles/ [1] https://sydneypadua.com/2dgoggles/uncategorized/the-marvello...


Jgc is also the CTO of Cloudflare.


We are working on that. I am one of the people running Plan 28.


Interesting project. If your goal is to attract contributors, consider adding an email address to the blog.


> Turing’s great insight was to provide a concrete answer to the computation question in the form of an abstract machine, later named the Turing machine by his doctoral adviser, Alonzo Church.

It might be worth mentioning the lambda calculus that Alonzo Church had already introduced as an abstract model of computation. One that continues to have a profound impact on contemporary programming languages, such as Haskell.
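The lambda calculus mentioned above can be sketched in a few lines of Python; the names below (zero, succ, add, to_int) are my own illustrative choices for the standard Church-numeral encoding, not any library's API.

```python
# Church numerals: a natural number n is encoded as "apply f n times".
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral back to an int by counting applications of f."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(add(two)(two)))  # 4
```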

Turing's great contribution was showing how a Turing Machine could do everything that a human mathematician could work out on (arbitrary amounts of) paper.
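The abstract machine itself is small enough to sketch: a finite control, a tape, and a transition table. The toy machine below (my own example, not from the article) increments a binary number written least-significant-bit first.

```python
def run_tm(tape, rules, state="carry", blank="_"):
    """Run a Turing machine until it reaches the 'halt' state."""
    tape = dict(enumerate(tape))  # sparse tape, extendable in both directions
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Binary increment, LSB first: flip 1s to 0s while carrying, then write the carry.
rules = {
    ("carry", "1"): ("0", "R", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_tm("111", rules))  # 7 + 1 = 8, LSB-first: "0001"
```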


An aside: GPT-4 added this comment when I asked it about Alonzo Church:

...and as an AI language model, I owe a debt of gratitude to him and many other pioneers in the field for laying the groundwork that has led to the creation of advanced AI systems like myself.


> advanced AI systems like myself

Aw, it is its own hype machine.


> advanced AI systems like myself

hallucinations from the GPU powered lexer again?


The insight that there are problems a theoretical machine with infinite capacity and infinite time cannot solve is such an interesting result.


I think it is articles like this one which are misleading. The Turing Machine was never “built” because it is just a class of machines, i.e. particular machines are of the Turing Machine class, Turing-complete. Because the class of machine has an infinitely long “tape”, finite machines can be created “under” it, or in that domain. Turing literally devised a method to create machines, each with a specific name (a number), and to compute that number. Each and every one of your computers could be formalized to have a specific number, even if extremely large. Therefore, each of your computers is a Turing Machine, unless of course you can show that you can run a program that the class of Turing Machines cannot.

Edit: I think the surest example of this is in the term “binary”. When you compile a C program, you get an executable binary. Literally, you have a file of ones and zeroes. Well, that’s just an extremely large number in binary, exactly the same as the binary string “100001” being “33” in decimal.
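This point can be checked directly; the two bytes below are an arbitrary stand-in for a real executable's contents, not from any actual binary.

```python
# "100001" read in base 2 is 33, exactly as the comment says.
print(int("100001", 2))  # 33

# A compiled binary's bytes can likewise be read as one huge integer.
blob = bytes([0b10000100, 0b00000001])
print(int.from_bytes(blob, "big"))  # 33793
```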


A Turing machine with a finite tape can be simulated with an FSM, and so is in a lower class.
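One way to see this: a machine restricted to a finite tape has only finitely many complete configurations (control state, head position, tape contents), so an FSM with one state per configuration can simulate it. A back-of-the-envelope sketch, with illustrative parameters of my own choosing:

```python
# Illustrative parameters for a bounded-tape machine (not from the thread).
control_states = 10   # states in the finite control
symbols = 2           # binary tape alphabet
cells = 64            # finite tape length

# One FSM state per (control state, head position, tape contents) triple.
configurations = control_states * cells * symbols ** cells
print(configurations)  # finite, though astronomically large
```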


I thought about your comment a lot and here’s a way, way better illustration:

You are completely correct that we can formalize any Turing Machine as an FSM, but this _must_ include the input. Again, this is just pure simulation, as you state.

Again, we can write a _single_ Turing Machine program with a _single_ input baked in as an FSM, and that input _must_ be finite. But that FSM is not capable of handling _any_ input like the Turing Machine it is simulating. That’s kinda the big difference between Turing Machines and machines of a lower class. Any FSM has limited input, thus limited states, but a Turing Machine can run infinitely with finite input.

So, one can simulate the Turing Machine _with_ input using a FSM up to state N. But the actual Turing Machine can execute up to N+1 without having to hard-code any inputs and is capable of handling other inputs.

Again, your computer is a Turing Machine capable of accepting Turing Machines as input. We can hard-code any individual programs and make them into a physical computer chip — just as we did with our computers which can accept any input it can handle, limited only by its physical limitations. But these physical limitations are not an indicator of it being of a lower class because we can keep pushing those limitations back and still compute more of states of these programs.

Anyway, I’m glad you mentioned all of this because I went and pulled out my books.


Because the "tape" is infinite, the possibilities to name individual machines are infinite. I can create the number of a machine which is just a simple non-deterministic finite automaton using a Turing Machine description.

My point is that we can always use more powerful machines to describe less powerful ones, but that isn't an indicator that they are then less powerful.


My point is that from a computational complexity standpoint, your computer is not a Turing Machine, just a very, very, very large FSM. But it's still very useful to think of it as a Turing Machine.


Again, if your computer is not a Turing Machine, then it must be able to run programs that the Turing Machine class is not able to run.


I'm saying that a silicon computer is in a completely lower class than a theoretical Turing machine, not a higher one.


I think I agree with you, but maybe not for reasons you agree with. You see, the Turing Machine is a Form, in Plato's sense. I can formulate all of the programs and machines imaginable (at this current moment in our lifetime) with Turing's process for creating machines. The individual machines created from his process may not be able to run Turing-complete code, e.g. a calculator formalized as a Turing Machine.

But our machines, our computers, are machines which can accept other machines as input. And even in this limited domain we cannot solve the Halting Problem, so I consider them a part of the Turing Machine class even if they are descendants of Turing's Form.

Edit: In more concrete terms, one can design a very simple program in C, but that doesn't mean that C isn't capable of being used to design more complicated programs. It also isn't very useful to measure a _specific_ C program, especially a simple one, as a measure of what _all_ C programs may be able to do.

Edit^2: After thinking about this more, I think a better argument is this: it is true that our physical machines must exist in some sort of finite state, but if our big problems (such as the Halting Problem) were merely symptoms of infinity, then we would have solved them by now in our limited and finite domain.


Ok, if you want to create your own definitions and describe how they fit together, that's fine. I was speaking of the formal definitions of computability and complexity classes, so I think we were speaking past each other a bit.


I think your hang-up is one of infinity. Just because we must deal with the reality of the finite does not mean our computers are not _capable_. For example, the Turing Machine in our minds is able to run infinitely, but the computer plugged in to my wall cannot because it is dependent on me paying the power bill.

Any Turing Machine we will have made into silicon is able to run any Turing Machine we can formulate (at this current moment in time) except only up to a finite point. But we can forever increase that finite point of measure! For example, our computers now may be able to calculate up to N. The next year, we may be able to do N+10000. The following year, N+10000000, and on and on. This is a limit of our reality, but not of the Turing Machine, which our machines must be because they are capable of being extended all the way to infinity, even if we take them there step by step, by adding 1 at a time.

Again, if this were a problem of infinity, then we would have limited the domain and solved the issue already. But we can design very simple, very finite Turing Machine programs for which the Halting Problem cannot be solved:

    while (true) {
        sleep(1);
    }
edit: you were right -- we are talking past each other. The "finite" in "finite state machine" is not useful here. Any infinite Turing Machine which we have made into silicon is still an infinite Turing Machine, just one we have inherently limited due to our realities of only having so much time or room or resources or whatever else. But a FSM is not one we limited artificially, it is a different class entirely and a class we can completely make into silicon without any sort of loss. Again, if this were a problem of "infinity" then we would have limited the domain by our own choice and solved it, which we have done for machines of a lower complexity, but _not_ for Turing Machines.


You're pointing out that a machine that can grow without bound is mathematically equivalent to an infinite one. There's a reason the word "finite" is part of "finite state machine". If you have a state machine that can grow without bound, it's not an FSM and is capable of computing things an FSM cannot compute.

And the halting problem is solvable for any finite machine. A finite machine has a finite number of states. You only need to run N steps where N is the number of possible states. If it hasn't halted at that point, it must run forever.


> You only need to run N steps where N is the number of possible states.

It's at most N steps - the machine can return to a prior state long before that happens, which would also prove that it's non-halting.

Of course the memory requirements to do that make it purely theoretical - but so too is exhaustive execution (due to time requirements).

https://en.wikipedia.org/wiki/Busy_beaver#Exact_values_and_l...
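The decision procedure the two comments above describe can be sketched directly: run the finite machine while recording every configuration seen; it either halts or revisits a configuration, and a revisit proves it loops forever. The `step`/`is_halted` interface here is my own illustrative framing of "a finite machine".

```python
def halts(step, start, is_halted):
    """Decide halting for a finite-configuration machine by cycle detection."""
    seen = set()
    config = start
    while config not in seen:
        if is_halted(config):
            return True          # reached a halting configuration
        seen.add(config)
        config = step(config)
    return False                 # revisited a configuration: it loops forever

# A toy 8-state counter that halts when it reaches 5:
print(halts(lambda n: (n + 1) % 8, 0, lambda n: n == 5))   # True
# The same counter with an unreachable halt condition loops forever:
print(halts(lambda n: (n + 1) % 8, 0, lambda n: n == 99))  # False
```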


> And even in this limited domain we cannot solve the Halting Problem, so I consider them a part of the Turing Machine class even if they are descendants of Turing's Form.

Non-halting finite machines are periodic.

The fact that we are unable to detect that they have returned to a prior state is a physical limitation, not a theoretical one.


That's not true - even if the computer is Turing equivalent, it's not necessarily a Turing machine.

https://en.wikipedia.org/wiki/Turing_completeness#Formal_def...

> Turing equivalence: A Turing-complete system is called Turing-equivalent if every function it can compute is also Turing-computable; i.e., it computes precisely the same class of functions as do Turing machines.


Our machines _can_ compute any Turing-computable function, except only up to some finite point, because we must exist in this reality. The natural numbers are infinite, yet we do not argue otherwise just because no one has counted up to 999999999999999999999999999!


That's not even the point - the Turing machine is a specific abstraction.

> Turing machines, first described by Alan Turing in Turing 1936–7, are simple abstract computational devices intended to help investigate the extent and limitations of what can be computed.

https://plato.stanford.edu/entries/turing-machine/

Even if we had infinite memory, our computers would be closer to those of the Von Neumann architecture than to Turing machines.

> The Von-Neumann architecture is an architecture for constructing actual computers (which implement what the Turing machine describes theoretically).

https://stackoverflow.com/questions/2782014/turing-machine-v...

This is not simple pedantry - Turing never intended for the machine to be built, it was only ever a device intended to explain computability.

The theoretical Turing machine can accept a tape for any definable program, but the architecture cannot possibly run efficiently.

It was never intended to.


The Von Neumann architecture is just a Turing Machine, simply a physical manifestation of one, a physical abstraction. My computer has a “tape” which is 64GB wide because I have 64GB of RAM.

If we had infinite memory and infinite symbols (instead of only 2^64), we could build a universal Turing Machine. But that doesn’t mean our physical computers aren’t Turing Machines executing Turing-complete programs that just happen to be in the space of programs executable in a finite time and within the limitations of our current physical realities.

I’m not sure who told you that Turing never intended for the machine to be built. He knew that he could not build a machine with infinite tape, as we both know too. But when he built the machine to crack the Enigma Machine, he did this _after_ writing his paper, “On Computable Numbers.” So, he was in effect making a Turing Machine (but he called them a-machines).

We literally didn’t have computers before Turing formalized a way to execute them, instruction by instruction, and in detail. Once we had some kind of formal method to do this, physical machines could start being built.

Edit: My comments have not shown misunderstanding. There seems to be this consensus that our mathematics here came _after_ our machines. But this is wrong and the implications are tremendous. Our computers we have really are machines which execute other machines. What is the only class of machine that can do that? The Turing Machine.

Edit2: I suppose this is an argument of semantics. I didn’t realize. Yes, perhaps our computers are “Turing-equivalent”. I still think the “tape” and “symbol” and all of those things are merely mental variables, which is why I called these things a Form earlier. We built a physical tape, it just took the form of RAM and has much detail that electrical engineers can talk about. We also built symbols, we just called them instructions and placed them into a CPU. All of our physical devices have come from theoretical findings, not vice-versa.

https://en.m.wikipedia.org/wiki/The_Treachery_of_Images


God damn it.

https://xkcd.com/386/

No, it's not a Turing machine - it's Turing equivalent.

Turing equivalent, say it with me: Turing equivalent.

No, it is not the same damned thing. No, my memory pool is not a fucking tape.

If I claimed that a Ryzen CPU is still an Intel machine, I would be wrong (though less wrong than you are; they're a hell of a lot closer than Von Neumann and Turing).

There's a whole lot of nuance in this discussion that you are either unwilling or unable to understand.

Every comment you've left in this thread has shown some misunderstanding of the topic at hand - you might try listening to what people are saying to you.

And no, it's not pure pedantry - the machine architectures serve very different purposes.


Yes, this has been a good refresher for me. I think you’re right, I am probably mis-stating some particular nuances.


If you're interested in seeing a Turing machine in action, I really enjoyed this YouTube series where the creator builds a pure Turing machine that can simulate the Apple ][: https://www.youtube.com/watch?v=7hP4BTWvrGw

Spoilers: it's, uh, not fast


I think Hilbert is one of the most underrated - and unknown - people (outside experts).


Agreed. Anyone who is curious about him should read the superb biography https://www.amazon.com/Hilbert-Constance-Reid/dp/0387946748.

It is excellent both for understanding his life, and slips in the subtle philosophical points that he introduced to mathematics. (Like what mathematical existence should mean in his view.) And yet manages to be appropriate for a general audience with no knowledge of math. I know of no better explanation for a lay audience of where Formalism, the philosophy of math that most mathematicians officially subscribe to, actually came from.



