Turing Oversold? (idsia.ch)
354 points by anielsen 5 days ago | 303 comments





We've been over this. Gödel's mu-recursive functions were a poor model of computation because it's completely unclear how to physically implement the arbitrary-function minimization operator. So people didn't see how to build a machine that calculates this way. Similarly, there was no clear way to mechanize lambda calculus.

Turing Machines, on the other hand, were instantly and obviously mechanizable. It was clear that one could build a physical machine to run any Turing program without human input. By proving that they could simulate the other systems, Turing showed that those other systems could be mechanized as well.

I don't understand why Schmidhuber continues to ignore this crucial point.


Jürgen Schmidhuber has a track record of complaining about unfairness in the conventionally accepted view of the scientific record, especially in the English-speaking world. For instance, he has claimed for years that his own early research on AI has been unfairly overlooked or ignored by a "conspiracy" (his word, not mine).[a] At one point, the NY Times called him the "Rodney Dangerfield of AI research" because he doesn't get the respect he deserves.[b]

Ultimately, as always, credit will be assigned by future generations of scientists and historians long after everyone who is alive today has disappeared from the face of the earth. Many disagreements that seem important now will be forgotten. Many findings that seem important now will turn out not to be. Many names that are very prominent now will fade into obscurity. As always.

[a] https://people.idsia.ch//~juergen/deep-learning-conspiracy.h...

[b] https://www.nytimes.com/2016/11/27/technology/artificial-int...



>Jürgen Schmidhuber has a track record of complaining about unfairness in the conventionally accepted view of the scientific record, especially in the English-speaking world.

So, he has.

>Ultimately, as always, credit will be assigned by future generations of scientists and historians long after everyone who is alive today has disappeared from the face of the earth.

Well, credit for the field of CS is already being assigned now, and was yesterday and 20 years ago. It's not as if this future credit would be the first of its kind, or that it would be more accurate (or that it would not feed off the current popular views).


> But the iniquity of oblivion blindly scattereth her poppy, and deals with the memory of men without distinction to merit of perpetuity... Herostratus lives that burnt the Temple of Diana, he is almost lost that built it... Who knows whether the best of men be known? Or whether there be not more remarkable persons forgot, than any that stand remembered in the known account of time?

~ Sir Thomas Browne, Hydriotaphia (1658)


Likewise, people dispute that Ada Lovelace was the first programmer, because Babbage and Menabrea had previously created a few simple example programs.

But that downplays her accomplishments too much. She didn't write the "first program" but she was the first to understand what computers would be capable of doing (for example, that by assigning numbers to letters and symbols, computers could do more than simply perform numerical computations), and she was the first to invent foundational control flow structures such as loops and conditionals. Her program was much more rigorously defined and sophisticated than any previous examples.

>The longest program that Menabrea presented was 11 operations long and contained no loops or branches; Lovelace’s program contains 25 operations and a nested loop (and thus branching).

https://twobithistory.org/2018/08/18/ada-lovelace-note-g.htm...

https://writings.stephenwolfram.com/2015/12/untangling-the-t...

https://projectlovelace.net/static_prod/img/Diagram_for_the_...


I really enjoyed Stephen Wolfram's mini-bio of her.

https://writings.stephenwolfram.com/2015/12/untangling-the-t...

I very much recognized from that that she had the attitude and experience of a "programmer," so I would say she was the first programmer, in the modern sense.


Wow, thanks for the link. Really interesting story, fascinating to think about what could have been if she hadn't died so young.

"It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable.

The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. This it is calculated to effect primarily and chiefly of course, through its executive faculties; but it is likely to exert an indirect and reciprocal influence on science itself in another manner. For, in so distributing and combining the truths and the formulæ of analysis, that they may become most easily and rapidly amenable to the mechanical combinations of the engine, the relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated. This is a decidedly indirect, and a somewhat speculative, consequence of such an invention. It is however pretty evident, on general principles, that in devising for mathematical truths a new form in which to record and throw themselves out for actual use, views are likely to be induced, which should again react on the more theoretical phase of the subject. There are in all extensions of human power, or additions to human knowledge, various collateral influences, besides the main and primary object attained." -- Ada Lovelace, 1842 http://www.fourmilab.ch/babbage/sketch.html


Ada Lovelace's father was Lord Byron?! TIL

Sadly for her, she never knew him. Her parents split up, he left England, and she stayed with her mother. Byron died when she was just eight years old.

Look for Sydney Padua's comics for a lot of weird and strange facts about Lovelace and Babbage. (To be fair, Babbage was much weirder)

I was surprised to learn that as well.

I learned about it in Walter Isaacson's Innovators.


> because Babbage and Menabrea had previously created a few simple example programs.

That almost sounds to me like saying that the Wright brothers "made a few simple flights".

> first to invent foundational control flow structures such as loops

I wonder how sigma notation fits into this. Clearly the notion of expressing arbitrarily repeated operations using a fixed amount of information (which is what a loop is, essentially) was known at least to Euler.

Also, the fact that the machine enabled these things in the first place (unlike even some of the later machines such as Z3) suggests that its designer was either aware of this necessity to begin with, or at the very least in possession of preternatural prescience. In that case the use of these features in some programs but not in others would be not a matter of inventing them in the former programs but instead a matter of choosing to exploit existing hardware features, or declining to do so, depending on what program you're looking at.


> I wonder how sigma notation fits into this. Clearly the notion of expressing arbitrarily repeated operations using a fixed amount of information (which is what a loop is, essentially) was known at least to Euler.

You can even go further back. Algorithms with loops were known already to Babylonian mathematicians. So you don't need to resort to preternatural prescience.

The Z3 was not intended as a general computing device but as a practical help for engineers. Because of that, you can't say it was missing something it didn't need to do its job. Whereas when Zuse designed Plankalkül, loops and conditional branches were naturally included in the design.


> That almost sounds to me like saying that the Wright brothers "made a few simple flights".

Richard Pearse gets written off in the same way to elevate the Wright brothers' flying accomplishments.

Pearse was just pursuing powered flight as a hobby in rural New Zealand; he didn't bother informing the press and didn't even bother telling the government about his flights and engineering designs until WWII, 40 years later.

https://en.wikipedia.org/wiki/Richard_Pearse


Not sure what improvement Pearse made over, say, Ader.

Pearse achieved semi-controlled flight 300 cm above the ground versus Ader's uncontrolled ground-effect flight 20 cm above the ground.

I have no idea what is "a semi-controlled flight". Is it like "semi-riding a bike"?

Since the notes attributed to Lovelace were written as part of some scribe work she was doing for Babbage, what indication is there that the notes are her own original work, and not something that Babbage asked her to write? Don't get me wrong, she was clearly a very intelligent woman.

People who say Ada was the first programmer must think Babbage came up with the first general-purpose computer and then never wrote any instructions for it.

Maybe the first programmer who wasn't also a hardware engineer.


Defining the first person to do anything is almost futile. No one exists in a vacuum, and most firsts were standing on the shoulders of technological accomplishments far outside of their own field.

That said, I'm sure in the case of Ada Lovelace there is at least some element of my misogyny involved.


Awkward typo.

>She didn't write the "first program" but she was the first to understand what computers would be capable of doing

Or merely the first to express it? I'm pretty sure Babbage himself, as the inventor, understood well what computers would be capable of doing.


He was focused on using his machines to efficiently generate mathematical tables. It was Ada who realized the potential of the analytical engine as a universal computer. She even wrote that given a good numeric representation for sound, one could program the analytical engine to generate algorithmic music based on mathematical rules. Babbage himself wrote examples of programs for the engine, but they were all very simple examples of numerical calculations that would be applicable for generating mathematical tables.

Well, that's the often inaccurate title of "first programmer", which Charles Babbage clearly holds, at least for his own machine. Perhaps it would be better to describe Ada Lovelace as an early enthusiast / evangelist / adopter.

I'm not sure that inventors always understand the consequences of their inventions. Often, they are either focused on first-order capabilities and neglect the larger significance; or focused on visions of the future but unable to turn them into useful products in the short term.

Jeremy Campbell's The Grammatical Man would like a word with you on Lovelace's actual documented contributions.

> I don't understand why Schmidhuber continues to ignore this crucial point.

From TFA:

> There is a seemingly minor difference whose significance emerged only later. Many of Gödel's instruction sequences were series of multiplications of number-coded storage contents by integers. Gödel did not care that the computational complexity of such multiplications tends to increase with storage size. Similarly, Church also ignored the context-dependent spatio-temporal complexity of the basic instructions in his algorithms. Turing and Post, however, adopted a traditional, reductionist, minimalist, binary view of computing. Their machine models permitted only very simple elementary binary instructions with constant complexity, like the early binary machine model of Leibniz (1679)[L79][LA14][HO66] and Zuse's 1936 patent application...

I presume by "seemingly minor" Schmidhuber implies "it turns out to be very important".


Not to dispute your point, but note the lambda-calculus-as-a-reasonable-machine papers from the last couple of years: it turns out (despite the seeming general understanding to the contrary in the past) that polynomial interpreters for some meanings of “lambda calculus” (including IIRC a weird very general one, call-by-need on open terms) are perfectly possible, meaning that many fundamental questions of computational complexity can be straightforwardly expressed in terms of the lambda-calculus machine as well. Linear-time interpretation (thus e.g. classes L and NL) seemed out of reach last I checked, though.

To echo my other comment here, it’s not that Church himself knew that. Even five years ago people did not know that. It’s not a question of priority. But I find it fascinating and reassuring nevertheless, as actually programming e.g. a self-interpreter for a Turing machine—or for general recursive functions, or for combinatory calculus—is an absolute slog.


Can you give some links to these recent papers? Sounds interesting. Thanks!

Didn’t have access to my archive at the time, so got some of the details wrong it seems (e.g. CbV not CbN, result for time is older than I remembered). Main thrust should remain valid, but be careful. In any case, here you go:

Dal Lago, Martini (2008). “The weak lambda calculus as a reasonable machine”. DOI:10.1016/j.tcs.2008.01.044, apparently not on arXiv.

Accattoli (2012). “A fresh look at the lambda-calculus”. DOI: 10.4230/LIPIcs.FSCD.2019.1, apparently not on arXiv.

Accattoli, Dal Lago (2014). “Beta reduction is invariant, indeed”. DOI: 10.1145/2603088.2603105, arXiv: 1405.3311 [cs.LO].

Accattoli, Dal Lago (2016). “(Leftmost-outermost) beta reduction is invariant, indeed”. DOI: 10.2168/LMCS-12(1:4)2016, arXiv: 1601.01233 [cs.PL].

Forster, Kunze, Roth (2020). “The weak call-by-value lambda-calculus is reasonable for both time and space”. DOI: 10.1145/3371095, arXiv: 1902.07515 [cs.CC].

Now that I’m looking through bibliographies, there are apparently other relevant intervening papers in the vicinity, even by some of the same authors, but these are what I’ve looked at personally.

Bonus: the paper

Hackett, Hutton (2019). “Call-by-need is clairvoyant call-by-value”. DOI: 10.1145/3341718, apparently not on arXiv.

is unrelated to questions of complexity but is just so absolutely lovely.


It is important for engineering, but as far as I understand it is not that important for math. E.g. Gödel solved the completeness and consistency problems, and Church was the first to solve the decidability problem.

The OP itself documents:

> Nevertheless, according to Wang,[WA74-96] it was Turing's work (1936) that convinced Gödel of the universality of both his own approach (1931-34) and Church's (1935).

Unless "according to Wang" is meant as "I don't know if I believe it", then apparently it's documented that Godel himself thought Turing's contribution was major and shed important light on the mathematical implications of godel's own work.

There's never any one person that invents anything, it's always built on work that came before.

Reading the OP, I got increasingly bored and tired... ok, what's your point? Yes, clearly Godel and Church especially did foundational work without which Turing's work would not have been possible -- and I don't think anyone denies it; anyone with any kind of computer science education is surely familiar with Godel and Church. It's not like they are languishing in obscurity; they are both very well known and respected! Godel especially is widely considered a real giant in his fields. I am confident that neither Godel nor Church is going to be forgotten for many generations.

But Turing made important contributions that took important steps necessary for the development of CS as a field. It's not a mystery or undeserved why Turing's name ends up remembered.

The OP's point is just that any singular "inventor" is always building on the work of many who have come before? OK, sure. Boring. So we should never promote anyone as making important foundational contributions? Well, people aren't gonna stop. Boring.


It's also the case that Church only knew that the notion of function defined by the lambda calculus was coherent once the Church-Rosser property was proven, and that was not published before Turing submitted his article.

> I don't understand why Schmidhuber continues to ignore this crucial point.

It is difficult to get a man to understand something when his salary depends upon his not understanding it.

But wait a minute, you might say, facts are facts.

And if everyone had the time and resources to discover and digest every fact, your understanding would be the end of it.

But not everyone has the time and resources. To compensate, we rely on others to curate facts for us. When we encounter an internally consistent subset of facts that suits our ideals and our interests, we adopt that point of view.

There are infinitely many subsets of curated facts that can be presented as internally consistent. That's why there are so many different points of view.

What bearing does this have on Turing's role in computer science, and his latter-day fame in a society that came to be defined by silicon logic?

The First Computers Were Human (and Mostly Women)

https://mjosefweber.medium.com/the-first-computers-were-huma...

Turing, in addition to stating an ontology of computing, dared to invite the question, what is the difference between a computer and a human?


>https://mjosefweber.medium.com/the-first-computers-were-huma...

I cringe when I read ill-researched essays like these because they imply a relationship between women and computing that genuinely did not exist back then.

For the vast, vast majority of women in the field of computing at this time, they were nothing more than glorified calculators. Yes, there were a few women scientists and mathematicians (by few, I mean literally handfuls). Yes, it was a male dominated field.

But the overwhelming majority of women working in this industry at this time did secretarial style busywork. It wasn't glorious. It was underappreciated. It sucked.

These types of essays are a genuine attempt to rewrite a history that did not exist. It is literary gaslighting the likes of which the article we are discussing right now is attempting to rectify.


> These types of essays are a genuine attempt to rewrite a history that did not exist.

I'm not sure what you are saying? Could you clarify?

What is "secretarial style busywork"?

Is this article also "ill-researched"?

https://www.smithsonianmag.com/science-nature/history-human-...

What would a better researched view of history say?

I'm not trying to troll you or make you write an essay here - just trying to glean the main points of what you believe so I can see your point of view.

Are you familiar with the view that the "Turing Test" was first conceived by Turing as a challenge to discern the gender of a human hidden behind a screen?

In my comment I was trying to point to Turing's interests in the nature of gender and humanity. I find that expansive and prescient in the sense that "computing" to us moderns is more about NLP than calculating maths, and yet, it's all the same thing under the hood.


The computers we use are nothing like Turing machines, and if you want to credit one person for computer design, it would be von Neumann. The media needs hero worship. It doesn't matter who the hero is, but they must have one. This applies to every field. Every scientist, entrepreneur, and artist is overblown. The older they are, the more overblown they are. The worst case is movie actors, who literally everyone knows are not "heroes" but just human puppets who move and behave as directors ask and writers dictate, yet in the eyes of the public they are literally called "heroes" and "stars". The people moving these puppets are only mentioned in credits that no one reads.

Best I can tell, von Neumann stole the design that bears his name from John Mauchly and J. Presper Eckert. von Neumann doesn't seem to deserve as much credit as he is given. I'm not saying he does not deserve credit for his other work.

Both you and parent are correct: Von Neumann knew how to play the game to end up with "the credits people do read".

And Schönfinkel’s combinatory logic ten years earlier (we are talking priority here, right?) was even more awkward.

There’s also the point (mentioned e.g. by Wadler in his “Propositions as types” talk) that Gödel didn’t actually realize how pervasive universal computation was and actively pushed back against Turing proclaiming the equivalence. This is not to accuse him—he wasn’t particularly obstinate or unfair about it and, furthermore, came to understand the idea fairly quickly. But it’s not at all uncommon in science for the one who actually invented a thing to fail to realize what it was that they just invented, and somebody else comes years or in some unfortunate cases decades later and announces the founding of a new area of knowledge on that already-discovered site. Whom the later naming will associate with the discovery is a toss-up, but it’s generally fair and in good taste, I think, to mention both.


As someone said to me recently, Schmidhuber may have a few points about lack of credit regarding his work, but his having invented everything to do with modern NNs is a wild claim, and regardless, he has cemented himself as a boor and continues to double down. This is just another example of that doubling down.

"For example, it was claimed that Turing founded computer science.[...] Turing's 1936 paper provided the "theoretical backbone" for all computers to come."

So your argument is, because it is unclear how to "physically implement the arbitrary-function minimization operator", Turing is the better "theoretical backbone" and has founded computer science?


The question in the air in 36-ish was something like, "OK, clearly we can mechanically compute things like the sum of two numbers or the prime factorization of a number. But are there other things that can be computed with a discrete and deterministic mechanism?" (At the time they called these "effective" functions.)

Church had piles of stuff that he and his students produced that were computable with the lambda calculus. Basically, all of the natural number functions that a person thinks are intuitively mechanically computable, those folks had showed how to lambda compute. With this evidence, he proposed to Godel (they were working together at Princeton at the time), who was considered the world's expert, taking "lambda-calculable" as a mathematically precise version of "mechanically computable." But Godel was notoriously careful, and he did not accept the thought as perfectly clear.

That is, they had a subset of the things that could be mechanically computed. But was it the entire set? Or was there something that some discrete and deterministic mechanism could be made to do that would lead to more than Church's set?

Imagine you are Dedekind and you are looking at the primitive recursive functions and (1) any such function is intuitively mechanically computable, and (2) you are able to work out how to define a pile of things like prime factorization of an integer using this system. You might well conjecture that this is it. But we know (and Godel and Church and Turing knew) that this is not it, that you need to add unbounded search of some kind (this is what minimization does) to get more things that are intuitively mechanically computable.
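
To make the role of unbounded search concrete, here is a minimal sketch in Python (the helper names are just illustrative, not from any formal presentation): a primitive-recursion-style loop always runs a number of steps that is fixed in advance, while the mu operator keeps searching for the least n satisfying a predicate and may never stop.

    # Minimal sketch of bounded iteration (primitive-recursion style) versus
    # unbounded search (the mu/minimization operator). Illustrative names only.

    def bounded_sum(f, n):
        """Primitive-recursion-style loop: the number of steps is fixed by n."""
        total = 0
        for i in range(n):          # always terminates
            total += f(i)
        return total

    def mu(predicate):
        """Minimization: return the least n with predicate(n) true.

        If no such n exists, this loops forever -- which is exactly why
        minimization is harder to picture as gears and levers than bounded
        iteration, yet is needed to capture everything that is intuitively
        mechanically computable."""
        n = 0
        while not predicate(n):     # unbounded search
            n += 1
        return n

    print(bounded_sum(lambda i: i * i, 4))   # -> 14
    print(mu(lambda n: n * n >= 50))         # -> 8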

I agree that the minimization operator is not as easy to picture with gears and levers as some of the other operations. But the issue in 36 was that a person could worry that there was even more. Just as minimization is not as easy to picture and the need for it didn't hit Dedekind with great force, could there be something else out there that we have all missed?

That worry disappeared when Godel read Turing's masterful analysis. It convinced him that this is what a machine can do. He wrote, "That this really is the correct definition of mechanical computability was established beyond any doubt by Turing.'' Church felt the same way, writing that Turing machines have "the advantage of making the identification with effectiveness ... evident immediately.''


I'm still a little confused. It seems like Turing came up with something that works, and clearly fulfills precisely what Godel and Church and Turing were all looking for; but it also seems like it's a mathematically inelegant solution. Is it possible that in the future we'll find a way to show that Gödel's mu-recursive functions or Church's lambda calculus also precisely describe 'what a machine can do'? If so, it seems that from a mathematical standpoint, either of those would be a better foundation to start from. (I'll totally agree that from a teacher's perspective, the Turing machine is the better explanation.)

> Is it possible

Yes, the set of functions computable with mu recursion, or with lambda calculus, is the same as the set of functions computable with a Turing machine. (Turing in his original paper showed that the set is the same for his system and Church's, and the proofs for mu recursion and many other systems are well-known.)

> I'll totally agree that from a teacher's perspective, the Turing machine is the better explanation.

When I learned this, the instructor did not use Turing machines. They used mu recursion. That has the advantage that you don't first define a machine and then derive the set of functions; instead you go straight to the functions. But I agree that Sipser (which I understand to be the most popular text today) defines a computable function as one that is computed by a Turing machine, and to my mind that has the advantage of honestly suggesting to students what they are about.


Thanks, I think I understand now. I thought there was a distinction between the mathematical idea of what computation is, and the engineering we've invented to implement that computation, and so I didn't really get the significance/literalness of what you were saying about mechanization.

It does seem weird to me though that we're letting our engineering limitations determine how we think about these things mathematically.


It's not as simple as engineering limitations determining how we think.

Turing machines do have mathematical advantages in some areas. See this siblings comment: https://news.ycombinator.com/item?id=28541800


I'm convinced that David Harland's Rekursiv[1] machine _is_ the manner by which lambda et al. might be implemented at the machine level. Unfortunately, Rekursiv seems to have died an ignominious death, with the last Rekursiv chip having fallen off the side of a steamboat (apocrypha; I remember having read this, but I'm unable to find the original citation.)

The Rekursiv advantage is its ability to do recursion on the bare-metal. It's described as an object-oriented architecture. Its memory was a persistent store of objects, with each object having its size, type, and position known in memory.

[1] https://en.wikipedia.org/wiki/Rekursiv


I guess I'm confused as to why this is more than simply an interesting formalism. Complexity theory and models of computation are largely based around the Turing Machine model. Lambda calculus is an effective lens to design programming languages and prove equivalence between programs. We know by way of the Church-Turing Thesis that these two models are equivalent. The Turing Machine model is better studied from a computation perspective and has a much more practical realization; what's the point in creating something like this? It feels a bit like silly lambda calculus fetishism, but again maybe the value here is the actual computation formalism itself.

For me, personally, I really like the lambda calculus as a tool to organize and better my computational thinking.

I came into programming/computer science from a mathematics degree; I read some old treatises on recursion theory[1] and fell in love. I couldn't ever quite wrap my head around the Turing Machine formalism, but kept at it for a while. Finding Barendregt's paper [2] was a huge shock! I grasped it much quicker. So, yes, lambda calculus and the Turing Machine formalism are equivalent in explanatory power, but there are also reasons someone might prefer one to the other. So, yes, for me, the value _is_ the formalism.

As to why I think the Rekursiv would provide a good platform for implementing lambda calculus on the bare-metal, that's entirely due to Rekursiv's memory model advantage and the fact that it has a user-writable ISA. Why would someone choose to implement the lambda calculus on bare-metal? You call it "fetishism," I call it fun!

More generally, I just really like the idea of having a machine with a user-writable ISA.

[1] Theory of Recursive Functions and Effective Computability: https://openlibrary.org/books/OL2738948M/Theory_of_recursive...

[2] Introduction to Lambda Calculus: https://www.cse.chalmers.se/research/group/logic/TypesSS05/E...


FWIW Nothing wrong with having fun with computing, and implementing lambda calculus on bare-metal can be as fun as any other computational exploration, so good on ya!

Thanks for clearing up that it's the formalism you find interesting. Also, to offer a counterpoint, I'm also from a math background, but I was more of an analysis person (as much as one can be in mathematics where it's all related) than an algebra person, and when I did some FP research, it often felt like where all the algebraists go to play computer science. I feel like analysts are underrepresented in PLT (and overrepresented in complexity theory!) but this is already going off-topic, so cheers.


Nah mate, s'all good! It's great to hear your feedback; I am very much of the algebraist spirit myself (I barely passed my Rudin-based real analysis course). Our experiences definitely align. FP feels much more like my favorite parts of math.

Out of curiosity, can you identify any areas in PLT that could be made more analyst-friendly?

Intuitively, it feels that PLT is almost necessarily of the algebraist; to me, one of the big divides is the discreteness of algebra vs the continuity of analysis. Would it help if there was a PLT that exhibited a greater degree of continuousness? If so, what do you think that might look like?


Your comment made me think a lot! Thanks for that. If I had to "capture" what made analysis interesting for me, it's not just the notion of continuity, but the idea that we're analyzing the behavior of an object in the concrete instead of the abstract. That means taking an object and deriving all sorts of behaviors, instead of building up algebras from simple group/ring operations. To bring this back into PLT, it would probably mean the ability to place computational complexity bounds on functions/methods. Something like Mercury's Execution Modes https://www.mercurylang.org/information/doc-latest/mercury_r...

Very cool! I'd never seen this formalized, but it definitely has the color of some "hacks" I put together in Python (specifically the idea of "destructively updating" an item in an index, as opposed to appending an element to the end).

That's also a very interesting perspective on analysis. To get a better feel: is your joy of analysis in getting into the "internals" of an algebra, to directly derive properties of elements within the algebra as opposed to relying solely on global properties endemic to the construction of the algebra?


I always mention the Rekursiv in my talk about Smalltalk computers. Its problem was being a CISC in the era of RISC. The Manchester Mushroom from just a little later had initially the same problem but went through a major redesign when they saw the JIT compilers for Self on the Sun Sparc processor.

I'd never heard of the Mushroom! Thank you for making me aware of it! A quick DDG search returned no results; I'd appreciate any links you have on the topic!

I'd also love to see your talk! Do you think a RISC Rekursiv could be achieved? What value, if any, do you think such might have in our current world?


This is the current Mushroom page:

http://www.wolczko.com/mushroom/

The slides for my 2019 talk about Smalltalk computers (in LibreOffice and PDF formats):

http://www.merlintec.com/download/2019_slides_jecel_fast1v2....

http://www.merlintec.com/download/2019_slides_jecel_fast1v2....

and the video (1 hour and 13 minutes):

https://www.youtube.com/watch?v=tATpzsyC6OA

If you replace the Rekursiv's special microcode memory with a cache to allow microcode to live in main memory you will essentially have a RISC version of the machine. I have adopted this solution in several of my own designs.


Absolutely fascinating stuff! Thank you for the links, friend, and welcome to a highly-treasured spot on my hard drive; all of these materials are going directly to my ~/Research directory! Computer architectures are an area I'd love to enter when I'm more experienced, so these are an incredible source of inspiration.

I'm also deeply in love with the merlintec website. It's refreshing to see a website which has persisted since 1999(!), and without a lick of JavaScript it would seem!

Could I purchase a Merlin 6 or a Pegasus 2000?

Speaking of which, the way that the Pegasus 2000's eGUI documentation[1] is written is incredibly dear. The "world" and "heaven" analogies are great!


Thanks for this link, really interesting stuff. For a while there was a fashion in OS research for orthogonal persistence but this is much deeper and more elegant.

Absolutely! As mentioned, I'm very much a novice of computer architecture. It kinda blew my mind that Rekursiv was unique in its abilities. Also fascinating are LISP Machines, which seem to have received quite a bit of love from HN already.

If I may ask, from your perspective, what's more elegant about Rekursiv's design? Is it the philosophy or implementation? I'd also love any links to orthogonal persistence!


That's the central argument of the Church-Turing Thesis, isn't it? Church felt very strongly that the difference between his and his students' "elegance" and Turing's "practical" was a difference only in abstraction, and that the two models were equivalent and translatable (you can write in one abstraction and convert it to the other).

That thesis continues to bear fruit, as the history of programming languages is almost entirely about bringing new and "better" abstractions to problems and then translating them to "dumber, more practical" machines. We have programming languages today modeled directly off the elegance of (though now sometimes still a few steps removed from being direct implementations of) the lambda calculus and mu-recursive functions, and the amazing thing is that they work great even given how "dumb" and "inelegant" our machines can be in practice.


> Is it possible that in the future we'll find a way to show that Gödel's mu-recursive functions or Church's lambda calculus also precisely describe 'what a machine can do'?

See also https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis


> we'll find a way to show that Gödel's mu-recursive functions or Church's lambda calculus also precisely describe 'what a machine can do'?

This is already proven to be the case. Mu-recursion (which IIRC is not Godel's general recursive functions, despite what Wikipedia says; Kleene was the originator of the mu operator. Godel's general recursive functions are defined in a separate, but yet again equivalent way that directly extends primitive recursion), Turing machines, and the lambda calculus are all proven to be exactly equivalent to each other. The fact that these three independent approaches to computability are all equivalent is why we have strong informal justification that the Church-Turing Thesis (a non-mathematical statement) holds.

Separately, there's a sentiment that I've seen come up several times on HN that somehow the lambda calculus, mu-recursion, or general recursion is more "mathematical" and Turing machines are less "mathematical."

I want to push back on that. The mathematical field of computability is based almost entirely off of Turing machines because there are many classes of mathematical and logical problems that are easy to state with Turing machines and extremely awkward/almost impossible to state with the lambda calculus and mu-recursion (this is consistent with my previous statement that the three are all equivalent in power because computability theory often deals with non-computable functions, in particular trying to specify exactly how non-computable something is). The notion of oracles, which then leads to a rich theory of things like Turing jumps and the arithmetical hierarchy, is trivial to state with Turing machines and very unwieldy to state in these other formalisms.

Likewise the lambda calculus and mu-recursion (but not general recursion) provide a very poor foundation to do complexity theory in CS. Unlike Turing machines, where it is fairly easy to discern what is a constant time operator, the story is much more complicated for the lambda calculus, where to the best of my knowledge, analyzing complexity in the formalism of the lambda calculus, instead of translating it to Turing machines, is still an open problem.
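
For a sense of why "one step" is easy to pin down for a Turing machine, here is a minimal single-step sketch in Python (the dictionary encoding of the transition table is made up for illustration): each step reads one cell, writes one cell, changes state, and moves the head one square, so every elementary operation has the same constant cost no matter how much tape is in use.

    # Minimal sketch of one Turing machine step (illustrative encoding).
    # delta maps (state, symbol) -> (new_state, symbol_to_write, move).

    def step(delta, state, tape, head):
        """One transition: constant work per step, independent of tape size."""
        symbol = tape.get(head, '_')              # '_' is the blank symbol
        new_state, write, move = delta[(state, symbol)]
        tape[head] = write                        # write one cell
        head += 1 if move == 'R' else -1          # move one square
        return new_state, head

    # Tiny example machine: overwrite 0s with 1s, moving right until a blank.
    delta = {
        ('scan', '0'): ('scan', '1', 'R'),
        ('scan', '1'): ('scan', '1', 'R'),
        ('scan', '_'): ('halt', '_', 'R'),
    }
    tape = {0: '0', 1: '1', 2: '0'}
    state, head = 'scan', 0
    while state != 'halt':
        state, head = step(delta, state, tape, head)
    print(tape)   # -> {0: '1', 1: '1', 2: '1', 3: '_'}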

There is indeed a mathematical elegance to Turing machines that makes it so that most of the mathematics of computability is studied with Turing machines rather than the lambda calculus.

The lambda calculus on the other hand is invaluable when studying programming language theory, but we should not mistake PLT to be representative of the wider field of mathematics or theoretical CS.

EDIT: I should perhaps make clear that if I put on my mathematical hat, mu-recursive functions seem like the most familiar formalism, because they align with a common way families of things are defined in mathematics (specify individual members and then generate the rest through some relationship). However, I would contend that for the majority of mathematicians outside of computability theory, the lambda calculus and Turing machines seem equally "strange."


In a nutshell, Church asserted effective computability by saying "look what you can do with the lambda calculus". Turing took the philosophical approach saying "this is what it means to compute". To Godel, Church's argument was incomplete. Turing provided the direct argument. Godel was convinced.

You seem like someone who might have a good book about this bit of history? Or perhaps a blog?

About the Theory of Computation in general. A draft: https://hefferon.net/computation/index.html :-)

Wonderful explanation, thank you.

I don't think it's important to quibble over who's overrated or underrated among these giants of math and CS who already get tons of recognition (I'm glad Schmidhuber brings many other historical names into the narrative).

However, yes, I do think that 'mechanization' or physical implementation is a crucial piece of Turing's contribution that is wrongly ignored in this article. And I think without mechanization, there is no CS as we understand it.


I can only repeat my comment from below:

"Likewise, Konrad Zuse never got a Turing award despite having created the world's first working programmable general computer 1935-41. [...] It was pointed out that none of the computers built during the 1940s were influenced in any way by Turing's 1936 theoretical paper, [...]"


As far as I know, Konrad Zuse didn't prove that this strategy was a universal model of computation. In contrast, Turing proved that his universal machine could emulate any other machine, given the right program.

In my view, Turing's contribution is providing a plausible definition of computation along with a deep and comprehensive theoretical characterization of the properties of this model of computation. This is why Turing machines form the basis of theoretical computer science, and not other models such as lambda calculus. I think saying that Turing machines were adopted since they were merely more convenient is highly misleading.

I think this pattern repeats a lot: There are many cases where you can point to multiple people who invented similar ideas around the same time, but it is typically the person who provided the most deep and comprehensive treatment of the subject that ultimately gets most of the credit. This depth is not conveyed in pop science attributions such as "Turing invented computation", but this doesn't mean Turing doesn't deserve the credit.


I don't believe anyone has received a Turing award for creating a working computer.

https://en.wikipedia.org/wiki/Maurice_Wilkes

His Turing award citation: "Professor Wilkes is best known as the builder and designer of the EDSAC, the first computer with an internally stored program."


I think Wilkes got his award for his software contributions despite being well known for his hardware efforts.

The 2009 award to Chuck Thacker, on the other hand, was clearly based on his contributions to hardware. I have the impression that ACM had a change in policy around that time.

But officially I seem to be wrong:

https://amturing.acm.org/bysubject.cfm?cat=16 https://amturing.acm.org/bysubject.cfm?cat=15

In the "hardware" category Wilkes is listed as the only one, while Thacker, with Brooks, Cocke and Wilkes are in the "computer architecture" category.


Not OP, but I agree with them.

The word computer means multiple things. In one sense it is the abstraction of universal computation. Imagine a world where actual physical computers didn't progress to universal computation, but were stuck being purpose-built to the present day. The field of computer science would be utterly different because they couldn't actually compute anything with their science. They could just discuss computability in an abstract sense. It'd be like physics without the particle colliders or telescopes or lasers.

I think of the founders of computer science more like the founding fathers of America, rather than a single guy named Turing, but some are more memorable than others.


> Imagine a world where actual physical computers didn't progress to universal computation, but were stuck being purpose built

It’s not clear that you need a theory of computation to build a stored-program computer. You might not clearly understand the theoretical capability of such a machine, but that wouldn’t prevent you from doing many useful things with it.


The article writes about your point of general computation and purpose built computers:

"Likewise, Konrad Zuse never got a Turing award despite having created the world's first working programmable general computer 1935-41. [...] It was pointed out that none of the computers built during the 1940s were influenced in any way by Turing's 1936 theoretical paper, [...]"


> Turing Machines, on the other hand, were instantly obviously mechanizable. It was clear that one could build a physical machine to run any Turing program without human input.

Harold Abelson points out in one of his lectures [0] that computer science isn't about computers any more than biology is about microscopes.

From that perspective, it is clear that Turing found an existing discipline of computer science and made major contributions to it. But it doesn't really seem right to say that he invented or founded computer science.

[0] https://www.youtube.com/watch?v=-J_xL4IGhJA&list=PLE18841CAB...


Computer Science is about what computers can do. To decide that, you first have to decide what a computer is. Turing Machines were the first abstractions that intuitively captured what it means to compute something.

Sure, if you take "computer" to mean "something that computes". In that case it would include humans. There was a great deal of research into things that can be effectively computed that goes back even before the focus of this article. And of course "computer" used to refer to humans who computed before the invention of mechanical computers.

But it's certainly not the study of what mechanical computers can do. Among other things, mechanical computers all have bounded resources unlike Turing machines.


A whole bunch of Computer Science is only relevant for actual computers not Turing Machines. When I was (a student) at University they didn't teach Wait-Free versus Lock-Free concurrent algorithms, but that's an important 21st century topic because actual computers today are capable of many simultaneous operations, and it's totally possible to write a program that isn't even Lock-free and may literally make no progress despite spinning its wheels.

Yes, that is true. There are applied sub-disciplines for all of the sciences. Typically you would call that something like "applied computer science" or "computer engineering", but CS is new enough that it hasn't split like that yet.

Nobody is saying that's not important. But the field as a whole began before mechanical computers were invented or practical, and there are several subfields that are basically indistinguishable from mathematics.


I mean, my university still has separate Chemistry and Astronomy departments, but at yours, following your preferred nomenclature, how do they distinguish among all the resulting departments called stuff like "Applied Philosophy", "Applied Philosophy" and "Applied Philosophy"? Or is it that for some reason you think only Computer Science should be singled out in this way?

I sense that you're trying to make a joke.

In any university, Chemistry is not the study of beakers nor is astronomy the study of telescopes.


But Chemistry cares about actual chemicals and Astronomy cares about our actual universe. Likewise, Computer Science is overwhelmingly concerned with actual computation. You aren't identifying a distinction here.

Both Software Engineering and Computer Engineering exist, as sub-disciplines, but it doesn't make sense to argue that somehow studying Graphene (a chemical which exists) is Chemistry while studying non-blocking algorithms is only Applied Computer Science somehow just because such algorithms could be used on an actual computer.


I mean, if you want to relegate CS to only mean theoretical CS, then you’d be in a small population: most theorists don’t want that, even. One of the cool things about CS is the rich interplay between applications and theory, and separating the two would only harm both.

Nobody is trying to relegate CS to a theoretical discipline. But theoretical CS exists, so it doesn't make sense to claim CS is the study of mechanical computers. Since it started as a theoretical discipline, it makes even less sense to say that the field was founded when we first figured out how to practically mechanize the ideas in CS.

Mechanical computers approximate Turing Machines, just like NP approximates RE: instead of asking “does this program halt?”, we ask “does this program halt in N steps?”.

If N is sufficiently (polynomially) large, the two are approximately equal.


> Mechanical computers approximate Turing Machines

Yup, so do humans with paper and pencil.


That's not the case, according to the article. If anything, the article implies the opposite. Turing machines were a re-abstraction of Godel's computational model that provided a path to mechanical realization.

Also, if you ever work with the Turing machine (an NFA hooked up to an infinite recording tape), it is not at all "intuitive" that this construction comprehensively captures the world of computation.


Godel's computational model does not intuitively capture what it means to compute. Godel himself was unconvinced that his model was universal, and it took Turing's paper to convince him that his model, lambda calculus, and Turing machines were equivalent and universal.

> Similarly, there's no clear way how to mechanize lambda calculus.

Is Lisp such a mechanization?


Any computer (with enough resources) can simulate any other computer. So if you get Lisp by running such a simulation (an interpreter) on your PC, it is not the same thing as a direct implementation of lambda calculus.

The famous Lisp Machines of the 1980s were reasonably traditional Von Neumann computers with nice features like tagged memory and optimized stacks.

Much closer to mechanizing lambda calculus, but still a bit Von Neumannish, is the SECD virtual machine for Lisp and friends:

https://en.wikipedia.org/wiki/SECD_machine


I'd say no, not on its own. It's not clear how to implement a Lisp program directly in hardware. You need some intermediate steps, which today involve a von Neumann architecture and an imperative assembly language, to implement Lisp constructs...
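
As a rough illustration of what "mechanizing" the lambda calculus has to do per step, here is a minimal normal-order beta reducer in Python (a made-up tuple encoding; it assumes all bound variable names are distinct, so it skips the capture-avoiding renaming a real reducer needs): each "step" means locating the next redex and substituting through an arbitrarily large term, which is not an obviously constant-cost, fixed-size operation the way a single Turing machine transition is.

    # Minimal sketch of normal-order beta reduction for the untyped lambda
    # calculus. Terms are encoded as tuples (illustrative, not a standard API):
    #   ('var', name) | ('lam', name, body) | ('app', fn, arg)
    # Substitution assumes all bound variable names are distinct.

    def subst(term, name, value):
        kind = term[0]
        if kind == 'var':
            return value if term[1] == name else term
        if kind == 'lam':
            return ('lam', term[1], subst(term[2], name, value))
        return ('app', subst(term[1], name, value), subst(term[2], name, value))

    def reduce_once(term):
        """Contract the leftmost-outermost redex, or return None if none exists."""
        kind = term[0]
        if kind == 'app':
            fn, arg = term[1], term[2]
            if fn[0] == 'lam':                      # the redex (\x. body) arg
                return subst(fn[2], fn[1], arg)
            r = reduce_once(fn)
            if r is not None:
                return ('app', r, arg)
            r = reduce_once(arg)
            if r is not None:
                return ('app', fn, r)
        if kind == 'lam':
            r = reduce_once(term[2])
            if r is not None:
                return ('lam', term[1], r)
        return None

    # Example: (\x. x) ((\y. y) z) reduces in two steps to the variable z.
    term = ('app', ('lam', 'x', ('var', 'x')),
                   ('app', ('lam', 'y', ('var', 'y')), ('var', 'z')))
    while True:
        nxt = reduce_once(term)
        if nxt is None:
            break
        term = nxt
    print(term)   # -> ('var', 'z')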

Just the references here are probably an amazing resource for early computer science, and I'm not going to argue against such a force.

Seems to be a lot of uneasiness, of late, about the way credit is allocated in science. IMO, it's mistaken to point this at the top: nobel laureates, heroic icons like Einstein or Turing. These figures are supposed to be idolized and idealized. Yes, this is "untrue," technically. But, it serves many purposes. A nobel prize win elevates science by singling out scientists for hero status. Achilles elevated Greece by giving Greeks something to collectively aspire to or adulate.

If you're already deeply interested in computer science, of course the detailed narrative recognizing dozens of brilliant early computer scientists is richer. Of course!

Where poor credit allocation matters isn't historical hero narratives, it's at the working scientist level. The grants & positions level. Here, it's important to be accurate, fair, etc. Being inaccurate, unfair or corrupt at this level creates actual deficits.


Right, we shouldn't underestimate the importance of narratives. We need narratives about the theoretical foundations of computer science, and Turing is the perfect figure to weave many of those narratives around. It's good for young people and the general public, and good for the field.

The Turing machine is a key conceptual model for understanding the basics of computation. The Turing Test is a great model for thinking about what being intelligent means. Hardly a week goes by without the term Turing Complete appearing somewhere in a HN comment. The fact that he also played an important role in the design and construction of actual practical computing machines, and did so to fight nazis seals the deal.

Of course there's more to it, there's plenty of credit to go around, but Turing is the perfect entry point for people to appreciate and learn more about all the work that went into the founding of computer science. It elevates the profile of the whole field.


We also shouldn't underestimate the importance of truth. Dealing with the world as-it-is has better results than interacting with a story we'd like to be true but isn't.

People waste their lives in service of causes and ideas that just are not grounded in reality. Not just in the philosophical sense that we cannot know truth, but in the practical sense of "the outcome you want will not flow from the actions you are taking today". Narratives are inferior to truth when it comes to making decisions.


I'm not advocating telling lies. Sometimes we simplify, and doing so can be perfectly appropriate. Unfortunately that does open the stage for nitpicking and pedantry.

Storytelling is how our society has transferred information since we started to communicate -- understanding that the map is not the territory, nor should it be. A beautiful narrative can convey important kernels more efficiently than endless minutiae. Awareness of this is important, and elaborations are helpful for those interested in the details.

I'm reminded of "The Glass Bead Game," which depicts an academic society that forgets the names of contributors since they're all just part of the flow of humanity.


The issue with stories is that they focus on unimportant bits, often for propaganda reasons. Pick some arbitrary first and every country can find someone to play up as a hometown hero. The US just happens to be rather quiet about who "invented" electricity, but longer-lasting incandescent lightbulbs and kites in lightning storms, that's the ticket. The British tend to streamline the Bletchley Park narrative by dropping the preceding Polish contribution, etc. etc.

In that context narratives end up glorifying endless minutiae.


Historians are all storytellers. Critical review is important. I really like this paper, which starts by paraphrasing Jane Austen, "I think it quite odd history should be so dull, for a great deal of it must be invention":

https://www.cmu.edu/epp/people/faculty/research/Fischhoff-Fo...


Arguably our storytelling was an efficient hack in an age before writing. A story is a very high-overhead, low SNR way of communicating kernels of truth, but it's robust over time, so it allowed transfer and accumulation of knowledge across societies and generations.

But then we've invented and perfected writing, developed symbolic languages and notations (e.g. math, musical), long-duration storage media for text, and eventually networked digital computers. In terms of communicating and preserving knowledge, stories are pretty much the worst possible option you can choose.

We're comfortable with narratives because we didn't have anything else for hundreds of thousands of years. Stories are pretty much hardwired into our brains. But that doesn't make them the right choice, now that we've figured out much better alternatives.

More than that, I'm personally suspicious of stories being used in communication. There's no good reason to use them, and there's plenty of bad ones - it so happens that what makes a good story robust over time is the same thing you need to manipulate people into believing lies and shut off critical thinking.


The main benefit of stories is that they are easier for people to remember than dry details. In terms of communicating knowledge, they are the form most likely to stick with us, as opposed to going in one ear and out the other, especially when it comes to areas where someone doesn't have expertise. This is, as you noted, incredibly prone to manipulation, but it doesn't change that if you want a random person picked off the street to actually synthesize the knowledge you're trying to tell them, a story is by far the way most likely to work. And I'd say that's important, since knowledge written down somewhere that no one remembers or cares about does nothing to change the way people act.

As far as preserving information goes, no argument there. Stories aren’t a good way to preserve the truth of matters for future generations. To look and determine if the stories told have truth in them requires more detailed writing.


Stories place ideas into context, not only making them easier to remember (as mentioned by another comment) but also easier to understand. Analytic philosophers are used to dry, precise language, but even they often rely on scenarios and narratives -- this can help reveal what the reader thinks intuitively and bring that into sharper contrast. By remaining story-free you're giving pedagogy the short shrift.

What has empirically brought more folks into careers in science, dry textbooks foisted by teachers or Star Trek? I'd argue Star Trek and science fiction more generally. You can chalk that up to human failings if you like, but inspiration is a need that can't be avoided if you wish to convince.


Disagree. A finely crafted but ultimately false story can be actively harmful. A young person may think that they are not of the same caliber as "the greats" and cannot make their mark on a field, which would discourage them from trying. All the while in reality "the greats" were never as great as the historians later depicted them. "Come on in, collaborate, and make a difference" would be a much more positive message and wouldn't be any harder to explain than what amounts to the creation of personality cults.

This is where the humanities have the tech world beat. While we quibble over correct narratives and seek a single one, the humanities have been completely soaked in the idea that there are nearly unlimited narratives that describe any given human endeavor, and they weave together into a rich and ever-changing tapestry.

This is why a historian can read, understand (both the pros and the cons), and respect books that represent an economic history, a social history, an information history, a microhistory, and even a great-man history of a given subject without trouble.

More reason for engineers to take humanities courses!


So can I continue to prefer my narrative? It seems to gather some upvotes and some downvotes, so at least it is interesting and elicits a reaction :)

Also, the more I learn about my heroes the more I realize that they never saw themselves as ubermensch. If anything, self doubt seems to be the common thread. I think this angle does not get enough attention.

However, I agree with you on a broader point. This is just one perspective. Here is another one: Turing the historical figure is necessarily oversold because many more people than Turing the real person contributed to his aggrandizement. Like all cultural icons, Turing the idea outlived and outshined Turing the man.


> So can I continue to prefer my narrative? It seems to gather some upvotes and some downvotes, so at least it is interesting and elicits a reaction :)

I think most of that reaction isn't coming from your narrative on history, it's from accusing other commenters' narratives of being false.


Nobody here is advocating telling false stories. Saying that Turing laid the foundations for computer science is not false. It's a perfectly valid opinion to hold. We might say it's a simplification, or even an exaggeration, arguably saying he's one of them might be better, but it's not a false statement.

Setting the record straight on this matter isn't nitpicking and pedantry, it's just giving credit where it is due. Since the intent of the "simplification" isn't to deceive, this shouldn't be a problem.

I don't think it's true that these narratives (false or true) are helpful, especially when they're false. I think it's counterproductive, and ultimately takes away credit from others, which can impede collaboration (say, one country disagrees with another on who invented something).

Don't forget the tragic way society later betrayed him.

Why do we need idols, though?

If there was no narrative, no idols, no celebrities, would people be less motivated to do science? Why do we need to lie to ourselves so?

> If you're already deeply interested in computer science, of course the detailed narrative recognizing dozens of brilliant early computer scientists is richer. Of course!

Personally I'm mostly uninterested in who did what, but maybe that's just me. It seems obvious to me that nearly every scientific discovery could have been done equally well by millions of people, it's just a matter of who had the resources to be educated, who decided to research the problem, who managed to snipe the answer first, and who had the right connections to get it acknowledged. They're still great achievements, for sure, but they're not the markers of exceptional genius we want to think they are, not for Turing or Einstein, but not for anyone at all, really.


>>They're not the markers of exceptional genius we want to think they are, not for Turing or Einstein, but not for anyone at all, really.

The point isn't to prove that they're special. The point is that something special happened and these people are designated symbols for that... and they're kind of selected for being good at this. We're not doing this for them, they're dead. The celebrity of Einstein is a deification of his relativity theories. We need idols for our symbolic world, to work without them in the real one.


But what purpose do these idols or symbols serve, exactly? I'm speaking as someone who doesn't care who came up with relativity and doesn't care whether there is a founding person of computer science or not let alone who that would be, and would like to know what others see. Is it an inspiration thing? A motivation thing?

I'd say it's a bit of both inspiration and motivation. That said, I think the main uses for these kinds of idols/heroes are to craft ethical or normative stories for how people should (or shouldn't) behave, as well as to assist with teaching people theories and concepts.

Learning about why correlation doesn't equal causation (and spurious correlations) is more impactful if you also learn about Wakefield's sins at the same time. He's a villain.

Archimedes and the bathtub is a great story - and I learned it in elementary school and still remember it and the lessons it teaches. We like to associate people with events and they help for learning and retaining information.


Not necessarily a motivational thing, but events such as these become widespread and allow for easier dissemination of information.

It's easy to see then that such events allow for the eventual "recruitment" of other scientists, and in showing society that "science is working" and "solves important problems".

Both of which serve to enrich the scientific world with new researchers and funding to keep the engine running.


> Why do we need idols, though?

Because we're flesh and blood, i.e. utterly irrational.

> If there was no narrative, no idols, no celebrities, would people be less motivated to do science? Why do we need to lie to ourselves so?

Yes, definitely, a huge amount of what motivates scientists is desire for fame, being considered a genius, Nobel prizes, scientific immortality, and so on. It is entirely unrealistic to imagine that we can stop being like this, it's almost a religious belief, akin to thinking that, one day, people can live without sin.

> Personally I'm mostly uninterested in who did what, but maybe that's just me. It seems obvious to me that nearly every scientific discovery could have been done equally well by millions of people, it's just a matter of who had the resources to be educated, who decided to research the problem, who managed to snipe the answer first, and who had the right connections to get it acknowledged. They're still great achievements, for sure, but they're not the markers of exceptional genius we want to think they are, not for Turing or Einstein, but not for anyone at all, really.

This may be an accurate description of your personality, in which case you're one in a million, or it may be that you're ignorant about the things that actually drive you. The vast majority of people are driven by some kind of desire for fame, recognition, status, upvotes, and so on.

Suggesting that Turing and Einstein were not "exceptional geniuses" is bizarre. Even in proper context, they were exceptional geniuses, just among other, lesser-known, exceptional geniuses. If we take your view seriously, we remove all human agency and uniqueness, we remove the idea of an "achievement" and we can only give credit to luck, the historical process, and various contingent circumstances. Even if your view is accurate, people simply cannot live that way. Creating narratives is part of what makes us human and narratives need protagonists (idols, heroes, whatever).


>Yes, definitely, a huge amount of what motivates scientists is desire for fame, being considered a genius, Nobel prizes, scientific immortality, and so on

That might do more harm than good. Once someone wins a Nobel, their productivity tends to decrease. Fighting over credit can be really toxic (see Newton vs Leibniz which probably stunted the development of calculus) and lead to less collaboration and knowledge sharing.

It may be unrealistic to think we can be different, but at least seeing that it's problematic should be unrelated to that. It's unrealistic to think crime will stop, but we should at least try to minimize it.


> This may be an accurate description of your personality, in which you're one in a million, or it may be that you're ignorant about the things that actually drive you. The vast majority of people are driven by some kind of desire for fame, recognition, status, upvotes, and so on.

Or it might be that people who are driven by fame and recognition are more likely to become famous than those who aren't, which skews our idea of what motivates people. Given how emphatic society is about fame and money as markers of success, I feel people tend to be mistaken in the other direction: many people think they are, or should be, driven by fame or money even when it simply contradicts their personality.

Even if it was indeed the case that most people are motivated by fame, I think those who aren't are more like 1 in 3 or 1 in 4 than 1 in a million. It might be 1 in a million in actually famous people, but not in the population at large.

> Even in proper context, they were exceptional geniuses, just among other, lesser-known, exceptional geniuses.

If I am correct that millions of people had the capability, that would place "exceptional genius" at 1 in 1000, or 1 in 10000. I think that's a reasonable ballpark.

> If we take your view seriously, we remove all human agency and uniqueness, we remove the idea of an "achievement" and we can only give credit to luck, the historical process, and various contingent circumstances.

Whether we acknowledge exceptional geniuses or not, it remains the case that 99.99% of people are not exceptional geniuses. Are you saying these people don't have agency, or that they aren't unique? I think we all have agency, we're all unique, and we all have achievements. Some achievements are more impactful than others, some achievements are more impressive than others, but these are not necessarily the same, and neither is necessarily remembered, because what matters most is not the person or the achievement, but how the story fits in the narrative. In any case, you don't need to care about that narrative to care about or acknowledge agency, uniqueness or achievement.


Idolatry seems like an emergent property of human collective consciousness. You can try to ignore it (it's been tried), downplay it (also been tried), and ban it (and again).

> Seems to be a lot of uneasiness, of late, about the way credit is allocated in science.

Of late? You should read up on the Newton/Leibniz hysterics over who invented calculus. The arguments over who invented the first light bulb, car, etc. Whether Greek knowledge (the foundation of western civilization) originated in the near east or even in India. Heck, people still argue about who "discovered" America first. There is national, religious, ethnic, racial, gender, sexuality pride tied to "priority". It's not just in science/math, it applies to everything.

> These figures are supposed to be idolized and idealized

Why? They weren't particularly good people. Neither were saints.

> Achilles elevated Greece by giving Greeks something to collectively aspire to or adulate.

Are you talking about science/knowledge or politics? But you are right on the point. It's what this is all about at the end of the day. Politics.

Without politics, the discovery/knowledge would be what is important. Because of politics, the people become the focal point.


An aspect of lionization is a wish and motivation to emulate and become heroic oneself...

But one soon realizes you probably won't get heroic credit even if you do contribute something heroic, neutralizing that encouragement.

Therefore, you'd better do it for the love of the work itself or for how it helps others.

There's no limit to what you can accomplish if you don't mind who gets the credit.


Not scientists, but some of the best writers of the 20th century never got a Nobel - I'm thinking especially of Proust and Kafka (and I would say Céline was more worthy of the Nobel than Camus and especially Sartre). I'm sure the same thing happens in science with regard to this Swedish prize.

True, but writing has many forms of hero culture. Tolkien doesn't need a Nobel, neither does Kafka. They became heroes regardless.

"Seems to be a lot of uneasiness, of late, about the way credit is allocated in science."

This has always been the case---medieval and renaissance thinkers would publish anagrams of their key findings because they didn't want to give someone else the advantage of knowing the finding but also wanted to prove that they thought of the idea. IIRC, Isaac Newton did not publish any of his findings until someone else threatened to publish their independent results. And he's known as the creator of calculus because the British Royal Society got into an academic slap-fight with the French.


There are a lot of true facts in the article, but it does not explore the reason why this is so.

I feel the era of great thinkers who single-handedly performed disruptive breakthroughs in their field, the Galileos and Newtons, was over with the Einstein era (and even Einstein stood on the shoulders of giants).

No one works in isolation any more, and that is not a bad thing. You can subject any relevant figure to a similar analysis and come with the same results, it's absurd to try and come up with someone with such an overwhelming figure like Albert Einstein these days.

But if you need to choose a Founding Father of Computing Science for the general public, I'd say Alan Turing is the best candidate. Scholars will give due credit to Church, Zuse, von Neumann and all the others.


No one worked in isolation in the past either.

Move Newton, Faraday, Maxwell and Einstein 10 km away from where they were born, surround them with a different set of chimps, and the story doesn't end the same way.

A good book by Niall Ferguson - The Square and the Tower - makes the case that traditionally historians have studied individuals instead of groups because it's easier to collect data on one chimp than on the entire troupe.


"I am, somehow, less interested in the weight and convolutions of Einstein's brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops."

<https://graph.global/?id=2851>


Yup - the influences of, e.g., Newton happening to delve into reading up on Archimedes, Descartes and Fermat, and then synthesizing their inventions in his mind with a lot of time on his hands, or for that matter Leibniz getting math tutoring from Christiaan Huygens, seem to be crucial to the invention of fluxions/infinitesimals. (Approximately from memory of reading Infinite Powers by Steven Strogatz.)

Doesn't diminish their achievement in my mind.


Having lived both 10kms north and 10kms south of Newton's birthplace (in more flat Lincolnshire farmland) I'm not sure he's the best example for that argument!

The idea that history is wrong to focus on "chaps" derives from marxism; and Ferguson is very much anti-marxist. The marxian view would be that historical change is the result of economic forces; that if (e.g.) Turing hadn't done it someone else would have, because economics was driving history in that direction.

I'm sympathetic to the marxian view of Great Men; I think it's no coincidence that the related work of Godel and Turing was published within a couple of decades of one another, or that the ideas of Copernicus, Kepler and Galileo emerged around the same time as one another.

I'm certainly impressed by the greatness of Great Men; but I'm hard-pressed to find one whose discoveries were so remarkable, in the context of their times, that no one else could have been expected to make similar discoveries around the same time.


Alternative angle: among their insights and discoveries, the successes will be shaped by survivorship bias. When deciding what part of one's work to focus on, a person will pursue the things that are close enough to other contemporary work at the time, because it provides a short path to buy-in.

Turing was interested in a bunch of other stuff, but what people know about is the Computer and his war work (at Bletchley Park). His work on say Morphogenesis (why are zebra stripes different on each animal?) is little known.

But Turing probably isn't more important to how you get from the Treaty of Bern in 1874 (creating the UPU thus you could now practically write letters in Paris and send them to New York and it Just Works™ albeit it's expensive and slow) to the Internet than, say, Godel (more fundamental observations about the nature of mathematics that underpin computation) or Grace Hopper (the first compiler although today we'd say this is only a linker). Her Navy bosses couldn't immediately see any value for it. But Grace is apparently the first to make use of the meta-applicability of computing - the minutiae of actually programming the computer are tiresome, a lot of rote tasks perfectly suited to a machine, so, why not have the computer do those parts for you?


Tangential, but one thing I learned from dense computer history book The Dream Machine is that the term "von Neumann architecture" is improperly assigning credit:

https://en.wikipedia.org/wiki/Von_Neumann_architecture

von Neumann simply described the work of Eckert and Mauchly on the ENIAC in a written report. And his name was on the report which made people think that he came up with the idea, which was false. It also resulted in a patent dispute -- it's interesting to imagine what would have happened if the concept had been patented. The book goes into detail on this.

The Wikipedia article also talks about Turing machines as a precedent that store data and code in the same place. But ironically I'd say that probably gives him too much credit! I think a large share should go to the people who designed a working machine, because it's easy to, say, come up with the idea of an airplane; much harder to make it work :) And to me it seems unlikely that the Turing Machine, which was an idea created to prove mathematical facts, was a major inspiration for the design of the ENIAC.

Finally, even though the author of this web page has his own credit dispute, I appreciate this elaboration on the credit assigned to Turing.


> von Neumann simply described the work of Eckert and Mauchly on the ENIAC in a written report.

Actually, it was a study group to come up with the successor to ENIAC (called EDVAC) which included Eckert, Mauchly, von Neumann, Goldstine and Burks. Von Neumann was the last to join, but wrote down the group's conclusions in a memo meant for the group. Herman Goldstine typed that up into a nice report but listed von Neumann as the sole author and distributed 24 copies to researchers. Many new copies of the report were made and circulated causing confusion about who had created the ideas.

George Dyson's "Turing's Cathedral", on the other hand, argues that von Neumann's close relationship with Gödel had a major role in getting the stored program idea adopted by the EDVAC group.


even Einstein stood on the shoulders of giants

People have had that perched-on-giants feeling for some time:

This concept has been traced to the 12th century, attributed to Bernard of Chartres. Its most familiar expression in English is by Isaac Newton in 1675: "If I have seen further it is by standing on the shoulders of Giants."

https://en.wikipedia.org/wiki/Standing_on_the_shoulders_of_g...


I've heard this quote explained as an insult directed at one of Newton's enemies (either Leibniz or Hooke), referencing their short height. I'm not convinced it's true, but it is an amusing possibility.

Note that this might be a double-meaning jab at Hooke. Sadly brilliant if true.

This is just a theory, but I think this (assigning some major leap of science to a few specific persons) is how society remembers things. I.e., it is difficult, or even impossible, to go into the intricate histories of how things actually developed in middle or high school (and perhaps even in college), thus the people teaching us simplify it to make it easier to study and remember.

Once you start digging you realize that nothing is as simple. For example for physics, "Physics for Poets" by Robert H. March is an eye opener.


A typical medieval depiction of a great siege might be one king and two or three famous knights with a ladder assaulting a 5 foot castle manned by another king and a knight. Distilling stories to a handful of characters seems to make it easier for us to digest. I suppose it's easier for us to imagine ourselves as one of these people.

> But if you need to choose a Founding Father of Computing Science for the general public, I'd say Alan Turing is the best candidate. Scholars will give due credit to Church, Zuse, von Neumann and all the others.

I agree with this. It's certainly the case that I wish more people knew of Alonzo Church and Kurt Gödel, but you have to realize in a "PR" sense that it's simply not going to be feasible to teach the general public about their contributions.

And Turing's contributions were genuinely ground-breaking, there's a reason that computer science is lousy with concepts named after or by him (Turing machines, Turing-completeness, even the word "computing" was arguably coined in "On Computable Numbers"). He also thought deeply and hard about the philosophical implications to computing in a way that others didn't (the "Turing test" being the obvious example).

In addition: when a mathematically inclined person describes any kind of mathematical concept to laymen, the first question is always "Yeah, but what is that actually useful for?", asked with a certain amount of disdain. With Turing, the answer is powerful: "How about defeating the Nazis and laying the foundation for modern society?". That case is harder to make for Church or Gödel: they obviously didn't work for the GC&CS, and "lambda calculus" as a concept is a much more abstract thing than Turing machines, which laymen can readily understand (i.e. it's "just" a computer).

Add to that the fact that Turing's story is not just about computing, or code-breaking, it's also the story of the suffering that society inflicted on gay men. The fact that he was shamed into suicide is just all the more reason to celebrate him now.

I agree with the basic point of the article, but I have no issue with giving Alan Turing this title. He earned it.


Very much agreed about thinking deeply and having earned the title!

He applied computational thinking all over the place, showing great foresight in https://en.wikipedia.org/wiki/Turing_pattern


> The fact that he was shamed into suicide is just all the more reason to celebrate him now.

Please don't diminish his legacy by repeating this lie. Turing's suicide is contentious and circumstantial at best. His documented behaviour had none of the very common signs of suicide - there was no note, he had plans for later in the week, and none of his close friends noted any change in behaviour.


Suicide or not, his treatment by society was equally heinous and repulsive. Even if he had lived a long and happy life, his story would still stand as a stark example of the terror and evil of homophobia.

Everything builds on past work. Educated people, at least in europe, were quite well connected and aware of each others works in those times too.

Because those others mostly aren't anglos helping the war effort.

Nobody exists in a vacuum, but I think Kuhn was right: scientific progress is made up of long periods of incremental work punctuated by short bursts of paradigm shifts. Those shifts are more likely to rest on a few very influential people who take the current state and look at it in a considerably different way. We haven't had that in physics in quite a long time and might not again.

The problem is that the general public thinks that CS == computers.

So, Founding Fathers of computing science becomes mixed - starting from those low brow thinkers we call journalists - with the idea of Founding Father of computing. And this is not only unfair, but technically wrong.


Disruption is a canary word to me now.

I'm not going to comment on the actual content, which is mostly [1] scientifically correct, but Schmidhuber (the author) has a record of wanting to be the center of attention [2] (even though LeCun is no better in that regard). Also, a third of the sources are written by him...

Just look at his previous blog post [3], in which he explains that the most cited neural networks all cite works by him. These papers cite dozens of papers, so a lot of other groups that are active in AI can claim the same thing...

[1]: For example, Turing published an independent proof of the undecidability of the Entscheidungsproblem in the [TUR] article, just a month after Church - something the article forgets to highlight.

[2]: https://en.wikipedia.org/wiki/J%C3%BCrgen_Schmidhuber#Views

[3]: https://people.idsia.ch/~juergen/most-cited-neural-nets.html


He just wants to get the facts right, especially the correct attribution of the original scientific contributions (who did it first).

Originality is easily defined as who did something first.

This might not be the same as the influence of some work. It might be that someone else does a lot of groundbreaking work which actually makes something work (e.g. Goodfellow et al for GAN). You can say the GAN paper had more influence than Schmidhuber's Adversarial Curiosity principle.

Also, of course some newer authors might not know of all the old work. So it might be that people get the same ideas independently. So when Goodfellow got the idea for GAN, he might not have known about Schmidhuber's Adversarial Curiosity.

The problem is, sometimes people did know about the other original work but intentionally did not cite it. You can not really know. People of course will tell you they did not know. But this can be fixed by just adding the citation. It looks bad, of course, when there are signs that they should have known, so that it was really intentional.

There is also a lot of arguing about whether something is the same idea or a different, novel idea. This can be ambiguous. But for most cases discussed by Schmidhuber, when you look at the core idea, this is actually not so much of a problem anyway: there is usually little argument about whether something is at least related, and related work should be cited too.

The question is then, which work should one cite. I would say all the relevant references - which is definitely the original work, but then also other influential work. Many people just do the latter. And this is one of the criticisms by Schmidhuber, that people do not give enough credit (or no credit) to the original work.


> of wanting to be the center of attention

It seemed more like he felt he was unfairly being uncredited. Which is probably why he wrote this - he now cares deeply about giving credit to the right people.


Surely the more noble cause for that would be giving more credit to others, rather than attempting to take away credit from a well-known figure. This article is somewhat about the other important figures whose knowledge Turing's was built on, but its central point is that Turing gets too much credit.

I understand why he'd care about that if he'd been uncredited and watched peers be overcredited, but I'd hardly call it a noble work, even if it is understandable.


The article is full of credit given to a huge number of people.

The article is called Turing Oversold, and the article is all about who should be getting credit instead of Turing. This isn't "Hey, are you aware of all these people who helped develop computer science", it's "Turing is overcredited, here's a list of other people to support my argument"

I disagree. It read more like "Turing is overrated, you should credit these people instead."

>> I'm not going to comment on the actual content that is mostly [1] scientifically correct, but Schmidhuber (the author) has a record of wanting to be the center of attention [2] (even though LeCun is not better on that matter).

You_again wants his work and that of others properly recognised. For example, his article, titled Critique of Paper by "Deep Learning Conspiracy" (Nature 521 p 436) [1], which is referenced by your link to Wikipedia, cites a couple dozen pioneers of deep learning, quite apart from Schmidhuber himself. Quoting from it:

>> 2. LBH discuss the importance and problems of gradient descent-based learning through backpropagation (BP), and cite their own papers on BP, plus a few others, but fail to mention BP's inventors. BP's continuous form was derived in the early 1960s (Bryson, 1961; Kelley, 1960; Bryson and Ho, 1969). Dreyfus (1962) published the elegant derivation of BP based on the chain rule only. BP's modern efficient version for discrete sparse networks (including FORTRAN code) was published by Linnainmaa (1970). Dreyfus (1973) used BP to change weights of controllers in proportion to such gradients. By 1980, automatic differentiation could derive BP for any differentiable graph (Speelpenning, 1980). Werbos (1982) published the first application of BP to NNs, extending thoughts in his 1974 thesis (cited by LBH), which did not have Linnainmaa's (1970) modern, efficient form of BP. BP for NNs on computers 10,000 times faster per Dollar than those of the 1960s can yield useful internal representations, as shown by Rumelhart et al. (1986), who also did not cite BP's inventors.

That is not "wanting to be the center of attention". It is very much asking for proper attribution of research results. Failing to do so is a scientific scandal, especially when the work cited has contributed towards a Turing award.

__________

[1] https://people.idsia.ch/~juergen/deep-learning-conspiracy.ht...


There's some confusion towards the end about Enigma and Colossus:

However, his greatest impact came probably through his contribution to cracking the Enigma code, used by the German military during the Second World War. He worked with Gordon Welchman at Bletchley Park in the UK. The famous code-breaking Colossus machine, however, was designed by Tommy Flowers (not by Turing). The British cryptographers built on earlier foundational work by Polish mathematicians Marian Rejewski, Jerzy Rozycki and Henryk Zygalski who were the first to break the Enigma code (none of them were even mentioned in the movie). Some say this was decisive for defeating the Third Reich.

Yes, Turing worked on Enigma and the Bombe to automate breaking of the code. However, Colossus wasn't for Enigma (it was for Tunny) and Turing didn't work on it. This paragraph seems confused about which is which.

Also, the fact that the Polish who did the original work weren't mentioned in the movie is just one of many things horribly wrong with that movie. It's so bad it shouldn't be considered canon.


  > Polish mathematicians Marian Rejewski, Jerzy Rozycki
  > and Henryk Zygalski who were the first to break the Enigma
  > code
I was at the Computer History Museum in Mountain View, CA, in ~2010 or so. There was some exhibition related to Enigma there, or maybe an Enigma device on display (I don't remember what it was exactly now).

The person who toured us around was telling us a brief story of how the Enigma was broken, starting with Bletchley Park. Me or maybe my friend asked who was the first to break Enigma, and he immediately answered that it was Turing, then noticed our puzzled facial expressions, and added something along the lines of 'ah.. yeah.. and the Polish did some minor work too' :). Just an anecdote.


It seems a bit surprising not to mention specifically the harder _naval_ Enigma and the U-boat threat in the context of Turing.

> It's so bad it shouldn't be considered canon.

Since when are movies supposed to be accurate historical references? They are made to be entertaining, so facts get kicked out of the door from Day 1.


It is not like you cannot tell a good story here without embellishing and distorting it.

As it happens, Verity Stob panned the movie (justifiably, IMHO), in her splendidly British style, for much more than just getting the facts wrong.

https://www.theregister.com/2015/01/26/verity_stob_turing_mo...


> It is not like you cannot tell a good story here without embellishing and distorting it.

You can't tell a complex story in a short time with numerous characters on screen. In books you can. In movies it's borderline impossible, and therefore simplifying/dumbing things down is a filter you need to apply first.


you don't have to make stuff up - that's not simplifying things.

And dumbing-down is not equivalent to simplifying, unless you do it in a dumb way.

Talented writers take one issue, or one theme, from the big picture and weave a story from that.


Please share with me one movie with a complex storyline involving dozens of characters, you know, like in real life, then. I am curious.

Midway is an example of a movie where the director and writer were concerned about historical accuracy, and so avoided the sort of risible nonsense that Stob lampooned in The Imitation Game. I am now curious as to whether you will respond in the way that I expect.

Canon? Isn't Turing being canonised the problem in the first place?

In a fictional universe a medium (book, movie etc.) being "canon" is the original author's seal of approval claiming "this happened".

I think this is a joking extrapolation to real life, and the "author" here is, in the best case, real life itself, or, as an approximation, historians' consensus based on evidence.


Correct. That's what I meant.

> Yes, Turing worked on Enigma and the Bombe

Since it wasn't linked, Bombe was based on the Polish Bomba machine:

https://en.wikipedia.org/wiki/Bomba_(cryptography)


The average person has no clue what theoretical computer science is.

But science fields do need marketing. All children have heroes they look up to. Putting focus on Turing's achievements is merely creating a pop star figure in the mainstream, which I think is a good thing: a smart dude works on a problem that saves World War 2 and now powers your phone and your TikTok app. Once you are actually interested in the field you can work out the nuances and the falsehoods in that claim.

Evaluating earlier work in some field throughout history always leads to a complex graph of achievements, but you cannot put that graph in the name of an annual prize. Do we change "Turing Award" to "Frege-Cantor-Godel-Church-Turing"?


>All children have heroes they look up to.

After reading that I sat here for a minute and racked my brain as to who my childhood 'hero' might be. I can't remember a single person.

It's amusing to me how much of intellectual work deals in a currency of status. Getting/giving credit for things appears to be the Prime Directive, at least among the observers. We've now graduated to not only stressing who is responsible but what demographic groups they are a part of.

Now, it could be that the real deal groundbreaking folks don't give a damn. Tip o' the hat to those people.


>a smart dude works on a problem that saves World War 2 and now powers your phone and your TikTok app.

The vast majority of people working anywhere near mathematics, physical sciences or electrical engineering (the 3 founding pillars of CS) in the 1920s and 1930s probably worked on problems related to WW2 during WW2. You can equally state that motivating claim for a lot of other people.

I think Turing gets the Media Treatment^TM because there's a lot of Tragic Hero Energy in his story:

<A gay man in an era that summarily rejected him [and we tell this story in an era that is extremely oversensitive and hyper-reactive to this particular sort of injustice]; a smart, shy pupil whose closest childhood friend (and suspected lover) died early of a now-extinct illness; a mathematician who dreamed of how numbers and lookup tables could hold a conversation, saw them used and counter-used to destroy cities and murder millions, then was finally rewarded with prison and humiliation by the people he fought for.>

Turing himself off course deserves all praise and glory and the righteous anger for how he was treated in his last years, but I think our era's affinity for him is just the old mechanism of people digging the past for battles that reflect the moral values they're currently (fighting for|winning|losing), see also the misguided claim that Ada Lovelace is the first "computer programmer", usually followed by a looong screed about Women In Tech.

We just like a good story to exaggerate and make it reflect our current moral memes, and the idea of a Man|Woman Ahead Of Their Times is a catch by this standard.


Oversensitive and hyper-reactive?

Jailing or sterilizing gay people for having sex is evil. End of story. It has only been 20 years since this was the law in some US states. I see no reason why vigorous rejection of this sort of policy as monstrous can possibly be seen as "oversensitive and hyper-reactive".


You missed the entire point.

The point isn't that this oversensitivity is misplaced, the point is that it's moral outrage porn that the tellers of the story use in a smart way to get a reaction from you.

This isn't necessarily a bad thing if it's just one or two stories among others; after all, the purpose of art is to get strong reactions out of its audience. But when every such story has to lean hard into the moral aspect to the exclusion of all else, it becomes a trite children's story told for nothing else but feel-good points.

Consider the amount of articles on trump during his presidency. How much of it was high-quality investigative journalism telling you things you don't know, and how much was "Trump tweeted something shockingly stupid, here are a list of experts you don't need telling you this is shockingly stupid, this means End Of Democracy (EOD)" ? The latter articles are technically true, but it's trite and accomplishes nothing but pulling on your memetic levers to get you to like/share/feel-bad|good-all-day.


>a smart dude works on a problem that saves World War 2 and now powers your phone and your TikTok app.

So much for the Polish Cipher Bureau. Not so many tragic hero opportunities there.


Also, this is a confusing mess anyway. Turing worked on the Bombe, but that's not a computer at all. Bletchley has one you can see in operation. It's a very complicated machine, but today a child looking at it can't help but wonder if this wouldn't be much easier for a computer.

The Bombe helps break Enigma, and thus is an early part of Ultra and arguably does "save World War 2" but it has no more relevance to your phone or your TikTok app than does the Rubik's cube or the slide rule.

Colossus isn't very far from the Bombe today, you might likely visit both on the same trip, but Turing didn't build it, and although it's clearly in some sense a computer, it is critically lacking in some features you'd want from a general purpose computer since it had a single purpose, to break Tunny in the mid 1940s.

In some sense Colossus is relevant to your phone and TikTok, because it is a computer, but, Turing didn't work on it and it isn't their direct ancestor by any means at all.


This has changed over time. In early computing, the people mentioned were Eckert and Mauchly, who designed the ENIAC, and von Neumann, who figured out how a practical CPU should be organized. Also Shannon, who figured out that relays basically did Boolean algebra and could be analyzed formally.

Bletchley Park work was still classified in those days, so not much was known about that. Much to the annoyance of Tommy Flowers, who built the Colossus code-breaking machine but couldn't mention it in post-war job-hunting. (Incidentally, Colossus was not a general-purpose computer. It was a key-tester, like a Bitcoin miner ASIC.)

As I've pointed out before, the big problem was memory. IBM had electronic addition and multiplication, with tubes, in test before WWII. But the memory situation was really bad. Two tubes per bit. Or electromagnetic relays. Or electromechanical counter wheels, IBM's mainstay in the tabulator era. To store N bits, you had to build at least N somethings. Babbage's Analytical Engine called for a memory of a ring of rows of counter wheels, a mechanism the size of a locomotive. Access time would have been seconds.

Much of early computing was about kludges to get some kind of memory. The "Manchester Baby" had a CRT-based memory, the Williams tube. That was the first computer to actually run a program stored in memory. Frederic C. Williams, Tom Kilburn, and Geoff Tootill, 1948.

After that, everybody got in on the act. Mercury tanks and long metal rods were built. Really slow, and the whole memory has to go past for each read, so twice the memory means twice the access time. Then there were magnetic drum machines. Magnetic tape. Finally, magnetic cores. At last, random access, but a million dollars a megabyte as late as 1970. Memory was a choice between really expensive and really slow well into the 1980s.

Turing was involved with one of the early machines, Pilot ACE/ACE. But he quit before it was finished.


Well, this article certainly shoots itself right in the foot.

If the problem is that some important early contributions to CS are being overlooked, the solution is to promote those contributions.

By framing this as Turing vs. others, the focus is squarely on Turing and his contributions. It puts itself in the position of having to beat down and minimize Turing's contributions before raising up other contributions. Pretty much setting itself up to fail to convince very many.

Instead, e.g., present the narrative of those early contributions, showing how they provided the foundation Turing worked from.

(edit: I should add: I understand perfectly that the point of this article probably isn’t to actually convince anyone of anything, but is just a hot take meant to get people worked up for the page views. So mission accomplished, from that perspective, I guess.)


Having an educational background in physics I find the Turing Machine a much more intuitive model of computation than say lambda calculus. To me this is Turing’s main contribution: linking the abstract world of computation to the physical world, and proving that a very simple physical machine can perform any computation (Turing completeness). That’s no small contribution.

> and proving that a very simple physical machine can perform any computation (Turing completeness)

This is a misunderstanding of the Turing machine model. The Turing machine is not designed to be a realistically implementable physical machine, and indeed there are no details in Turing's paper on how such a physical machine could be achieved.

Instead, the Turing machine model is designed to be a mechanistic model of what a mathematician does (in rote day-to-day tasks at least). The tape is a representation of the notebook where the mathematician writes down notes. The read head is a representation of the mathematician reading from the notebook.

It's fascinating to read the paper because of this, since it spends quite a few paragraphs showing that this simplification doesn't lose anything of the mathematician's work. It spends some time noting that even though paper is two-dimensional, its contents can be losslessly encoded on a one-dimensional tape. It spends time noting that writing/reading one symbol at a time is a good enough approximation for human writing/reading. It spends time noting that the next step doesn't need to depend on more than the current symbol + internal state, as human attention is also focused on a limited number of symbols.

This is actually why Turing's paper is so convincing on the argument of universal computation - not because the machine is realizable, but because it's hard to invent anything that a mathematician does while calculating that is not captured by the Turing machine model.
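To make the tape/head/state picture concrete, here is a minimal toy sketch in Python (my own example, not Turing's notation or anything from the paper): the whole machine is nothing but a finite lookup table from (state, symbol) to (new state, symbol to write, head move), plus a tape that grows on demand.

  # Toy Turing machine: transitions maps (state, symbol) -> (new state, symbol, move)
  def run(transitions, tape, state="start"):
      cells, head = dict(enumerate(tape)), 0     # unwritten cells default to "_"
      while state != "halt":
          symbol = cells.get(head, "_")
          state, cells[head], move = transitions[(state, symbol)]
          head += 1 if move == "R" else -1
      return "".join(cells[i] for i in sorted(cells))

  # Hypothetical example: append one more "1" to a unary number.
  unary_succ = {
      ("start", "1"): ("start", "1", "R"),   # walk right over the existing 1s
      ("start", "_"): ("halt",  "1", "R"),   # write an extra 1, then stop
  }
  print(run(unary_succ, "111"))              # -> "1111"

Nothing in the sketch says how the table lookup or the tape is physically realized, which is exactly the point above: the model abstracts the rote procedure, not the machinery.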

I very much recommend reading the first part of the paper [0] to see this argument (the second part, where it is proven that this abstract machine can in fact compute all known numbers, is more technical and flew over my own head).

[0] PDF https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf


> This is a misunderstanding of the Turing machine model. The Turing machine is not designed to be a realistically implementable physical machine, and indeed there are no details in Turing's paper on how such a physical machine could be achieved.

I’ve read the paper. I think we just take different things from it, possibly because you have a background in mathematics?

To me, the main takeaway (if I imagine reading it in 1936) is that a universal Turing machine is not all that complicated, and arouses the “I could build this thing”-intuition.

That of course doesn’t mean that Turing intended it to be realizable, that’s not my point. But he appeals to an engineer’s intuition. It’s that intuitive link that’s valuable and unique IMHO.

BTW, I think your takeaway is probably clearer in Gödel’s work.


The Turing machine has a tape of unbounded size so can’t be built simpliciter.

Moreover although it turns out that that model of computation is very robust and sufficient for all purposes in physics (unless black holes or something allow hypercomputation) Turing does not really definitively show that and in a way that can’t be definitively shown. All we have is a lack of counterexamples (admittedly a very convincing one.)

I don’t see why this intuition is that helpful generally either; Turing machines don't really help at an implementation level with modern engineering problems as far as I can tell. Most of the time you know that what you want to do is possible in finite time &c.—presumably the difficulty is doing what you want to do, and going via the Turing formalism would be a bit odd.


> The Turing machine has a tape of unbounded size so can’t be built simpliciter.

On the contrary, I think this is one of the advantages of Turing’s model: I can imagine standing there in my garage looking on as my universal Turing machine is running low on tape on the left side, and then simply attaching a new roll of fresh empty tape at the end, holding it as it is fed into the machine. :)

It’s simply the least leaky abstraction of computation.


Exactly this. Unbounded doesn't mean infinite, and people are sometimes confused by the distinction.

Including me. What is the difference?

Some parts of mathematics deal with infinite sequences, that is, actually infinite lists of numbers. It's usually assumed, and important for analysis, that these numbers are considered to be "all there" right from the beginning. You can do operations like: Compute the limit. Add up all of its elements. Determine whether two sequences are identical for all entries after the trillionth.

I think this is often part of the misunderstanding when you stumble into a post by someone who's confused about 0.999... = 1. People sometimes write things like: "0.999... only moves closer and closer to 1, it never reaches 1." I think that highlights a deeper point than people usually give these comments credit for. The thing is, 0.999... doesn't "move" anywhere, it's considered a completed object right from the beginning.
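For what it's worth, treating the whole infinite sum as a completed object is exactly what makes the identity come out; a standard geometric-series computation, nothing deeper than that:

  0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}} \;=\; \frac{9/10}{1 - 1/10} \;=\; 1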

Anyway, the point is that Turing machines are not like this at all. They only look at a fixed-size part of the tape during each step, from which it follows that they have only used a finite amount of tape at any point of their execution.

So for any given (halting) computation, you don't actually need an infinite tape, you just need "enough", without changing the result. This is important because it makes Turing machines a model for practical computers. For example, the device you're reading this on has gigabytes of tape, and that's big enough for many, many, many kinds of computation.
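As a toy illustration of "unbounded but never infinite" (my own sketch, not anyone's official model): a tape that only ever allocates the cells a computation has actually touched stays finite at every step, no matter how far the head wanders.

  # Sketch: an "unbounded" tape that stores only the finitely many cells used so far.
  class Tape:
      def __init__(self, blank="_"):
          self.cells = {}                  # position -> symbol, grows on demand
          self.blank = blank
      def read(self, i):
          return self.cells.get(i, self.blank)
      def write(self, i, symbol):
          self.cells[i] = symbol           # storage stays finite at every step

  t = Tape()
  t.write(10**9, "1")                      # a far-away cell, but still just one entry
  print(t.read(0), len(t.cells))           # -> _ 1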


In a theoretical sense, an unbounded number is always finite.

In a practical sense, Turing machines don't voraciously consume tape. Adding extra feet of tape gives you an exponential increase in what you can compute. So if you set up a program to be reasonably judicious with its tape use, you can just say that if it reaches an end you pause it for a day, head to the shop, and buy another reel. Big computations take a lot of time anyway.


Any given number is always bounded. I am not sure it makes sense to talk about an unbounded number.

The amount of the unbounded tape actually used is always finite - is that better?

The usual difference is just predicate ordering -- (1) for every program there exists a tape big enough vs (2) there exists a tape big enough for every program. In the first case, each individual (valid) program can get by with a tape of _some_ fixed length, but there's no bound on how big that requisite length might be. In the second case, since the tape requirements can be arbitrarily high you would need a legitimately infinite tape to service all possible programs.
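Spelled out with quantifiers (my paraphrase, restricted to halting programs), the two readings are:

  (1)\quad \forall p \;\exists n : \text{a tape of length } n \text{ suffices for program } p
  (2)\quad \exists n \;\forall p : \text{a tape of length } n \text{ suffices for program } p

Only (1) holds for Turing machines; no single finite n can satisfy (2), since for any n there is some program needing more tape.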

IMO the given example muddies the waters a bit by conflating the conceptual tape a given machine is running on (which might be infinite for non-halting programs) with the physical tape you've used so far (for which it suffices for such tape to be finite at any fixed moment in time, though the amount needed might grow unboundedly as time increases).


Note that tape usage typically depends on the input, so I would distinguish programs and computations (program + input).

I'm with you, I also found Turing's argument that his machine model captures all of computation very convincing and pointed that out in another thread.

However, for this argument to work, we need to accept both that all computation is captured by Turing machines, and also that what Turing machines do is in fact computable. In essence, Turing machine <=> realizable machine. Maybe some people are more impressed by one, others more by the other direction of that double implication.


> the Turing machine model is designed to be a mechanistic model of what a mathematician does (in rote day-to-day tasks at least)

Or more accurately, what human computers did in those days (i.e. the rooms full of people algorithmically working out numerical calculations for e.g. physicists or whatever without understanding what they were doing beyond the mechanical steps they were taking). In other words a formalization of so-called effective methods.


This dualism in CS still carries on to this day. Essentially, the lambda calculus is the forefather of the functional approach to computation, while the Turing machine epitomizes the imperative paradigm.
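To make the contrast concrete, here is a tiny sketch (Python for both flavours; the encoding and names are mine): the same addition done once with Church numerals in the lambda-calculus spirit, and once by stepping a mutable counter in the imperative, Turing-machine spirit.

  # Functional, lambda-calculus flavour: numbers are functions, nothing is mutated.
  zero   = lambda f: lambda x: x
  succ   = lambda n: lambda f: lambda x: f(n(f)(x))
  add    = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
  to_int = lambda n: n(lambda k: k + 1)(0)

  # Imperative, Turing-machine flavour: state is mutated one step at a time.
  def add_imperative(m, n):
      total = 0
      for _ in range(m + n):
          total += 1
      return total

  one, two = succ(zero), succ(succ(zero))
  print(to_int(add(one)(two)), add_imperative(1, 2))   # -> 3 3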

> and proving that a very simple physical machine can perform any computation (Turing completeness)

Not proving, conjecturing. It has not been proven to this day: https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis


It can never be “proven” because the notion of “any computation” being referred to is informal.

Also, it can’t perform any computation, if we say hypercomputation is a form of computation. Hypercomputation is (as far as we know) physically impossible, but so strictly speaking are Turing machines - a true Turing machine has unlimited storage, and an unlimited amount of time in which to complete its computations - any Turing machine you could physically construct would only be a finite approximation of a real one.


Ah, ok, I should have said “any computation that can be performed by any Turing Machine”.

So okay, yeah, it's Turing completeness that matters the most to me in computer science, on a purely pragmatic basis: When people are pitching the latest whiz-bang doodad, the answer is always, "This isn't doing anything we couldn't already do, so how does it make things easier and do it efficiently enough for our purposes?" That's the proper question when it comes to monads, coroutines, actors, non-blocking blah blah, etc. etc.

That's really important in an industry saturated with hype, elitism and general nonsense. Anything I can do in Rust, you can do in Assembly, so I've got some explaining to do (I can probably succeed in this example, others maybe not).

If Turing actually failed to deliver the goods on "completeness", I'd really like to resolve that.


Back in my undergraduate days, more than a decade ago, I think I learnt both lambda calculus and Turing machines in the same class: Formal Languages and Automata Theory.

Of course the Turing machine is easier to understand because... it's a theoretical machine, after all. On the other hand, lambda calculus was weirder; I didn't get it until learning Haskell :D


> I find the Turing Machine a much more intuitive model of computation than say lambda calculus

I think register machines are more intuitive than Turing machines - they are much closer to how real world computers work.


But that's the opposite direction from Turing's intention. The point of the Turing machine model is for the machine to be both mechanically plausible (no magic required) but also equivalent to what a human does when performing computation.

The Turing machine model is a mechanically plausible abstraction of a human performing a computation by following some steps in their mind and with a notebook. The tape stands in for the notebook. The read head stands in for the human's eyes. The write head stands in for their pencil hand. The internal state of the machine stands in for their mental state. At every step, you either read a symbol from the notebook tape and change your mental state in relation to this symbol, or write a symbol based on the current mental state. The procedure itself can be written on the tape, and you can occasionally refer back to it.

The original paper spends a good few pages working out this metaphor and showing that the machine model perfectly abstracts the mathematician's work of computation.


Yes, on the abstract computation side of the link register machines are much more intuitive.

But on the physical side of the link they are much less intuitive IMHO: it’s much less clear that “this is just a machine that I could build in my garage”.


The Turing machine is definitely not some machine that you could build in your garage. None of the mechanisms are specified or even specifiable. The important part, to Turing at least, is that it perfectly matches what a human does while computing a number, and that there are no magical steps like 'thinking'. Read symbol, change internal state, write other symbol down, rinse and repeat.

All of the details of how a symbol is read, recognized, how it alters the internal state, how the next symbol is chosen, or even how many symbols you actually need are not mentioned and even considered irrelevant. Turing wasn't building one of these, he was proving that this model captures all known computation, and that even so it is undecidable whether this machine would ever stop for any arbitrary computation.


No “the Turing Machine” isn’t a machine you can build in your garage. It’s an abstraction.

But any individual Turing machine is. Building a simple one is not very hard, and you can imagine supplying it with more and more tape as it needs it.

It’s thus the only model of computation that I can fully imagine “working”. And that to me is the beauty of Turing’s contribution.


The “physical side” was probably more important when Turing first came up with the idea, and people struggled to conceive of computers because none had yet been built. Nowadays it is arguably less necessary because they are an essential part of everyday life, and most people learn some programming before learning theoretical computer science.

In the days where you can buy a ram chip, a register machine is a really easy abstraction.

If you're trying to imagine something you can mechanically assemble out of discrete compoonents, it's not so great. You need an unlimited number of components hooked up in complicated ways.

A turing machine is a fixed-size and relatively simple box, plus a long tape that feeds through.


Nice to acknowledge the work of lesser known lights, but the article kind of reduces to another variation of, "your co-ordinates for belief and identity are arbitrary and unstable, the protagonists of the story you tell yourself are villains, it's probably because you are a shameful example of a person and should expect punishment, but you can redeem yourself by renouncing belief and aligning to the narrative of progress in the present."

It passes the popular bar for new knowledge because it illustrates a conflict and resolves it, but it's in a cognitive style that seems to conflate debasing knowledge with discovering it. It is how people are educated these days, where it is sufficient to destabilize and neutralize target ideas instead of augmenting or improving them, so it's no surprise, but the article seemed like a good example to articulate what is irritating about this trend in educated thinking.


It seems to be in style now to try to tear down the public perception of past great minds, I recently read a similar article about Hawking. And while this article may have some points, I don't think the overall framing is fair.

I think everyone with an interest in theoretical CS should work through Turing's 1936 paper at one point in their life. For me, the important part of that paper is how convincingly it argues that the proposed machine model is not just yet another model of computation, but a complete model of computation: that everything you could reasonably call computation is encompassed by these machines.

So there's a finality to Turing's definition of computation: these problems are unsolvable not just with this model, but with any reasonable machine model. It's very hard to make the same case for lambda calculus, which is (in my opinion) a large part of what made Turing's paper so groundbreaking.


After watching the Imitation Game, I did some googling/trying to find out how the Bombe worked. I expected it to not be very exact, but I also kinda felt like the entire narrative around that history in the industry was just super off!

- The core mechanisms of the machine for running the Enigma "quickly" were from the Polish

- The machine wasn't even a generalized computer!

I just felt really misled! Perhaps the biggest thing is that Turing probably ended up making substantial contributions to both the academic/theoretical side and the practical side, but it feels like we are missing opportunities to describe the confluence of so many people's ideas in this period of history that led to the current "machines that read instructions and then act upon them in a generalized fashion, very quickly".

This article seems to be that, and it's super interesting.


I knew some of the real history beforehand and the movie really annoyed me, so I’m glad to hear you were able to uncover the facts yourself and had a similar reaction!

Among so much that’s just plain wrong, I really dislike the insidious idea that Turing’s horrible punishment at the hands of the state was wrong because he was a unique genius and war hero. No, it was wrong because he was a human being and being gay should not be a crime!

That line of thought makes it harder to argue that, no, Turing may have been a genius but he wasn’t unique; he was just a significant player in a rich field. That doesn’t make him any less interesting.


> I really dislike the insidious idea that Turing’s horrible punishment at the hands of the state was wrong because he was a unique genius and war hero. No, it was wrong because he was a human being and being gay should not be a crime!

100% agree, an unfortunate mentality all too present in society, where we tend to build narratives of feeling bad for people because they are exceptional, and not because they are people

See the classic kids’ story of “oh, the store tried to kick the hobo out, but actually he was a millionaire!!!” How about treating all people like human beings even if they aren’t, like… valuable to you


I think the attraction to the Turing story is that it is a classical tragedy. If what happened to him happened to any gay man, it would be wrong. But since it happened to one of the greatest geniuses of the 20th century, who may have had other breakthroughs that could have pushed mankind forward, it is a tragedy. A tragedy for all mankind. Mankind suffered a huge loss due to its own moral failures.

The Imitation Game was inaccurate and horrible every way you look at it.

https://en.wikipedia.org/wiki/The_Imitation_Game#Historical_...


What I find so strange about The Imitation Game is that all of this is pretty well known; anyone who has skimmed the Wikipedia article on Turing and the overview article on breaking Enigma knows that the movie is pretty much complete horseshit. Most of the alterations in the movie removed things that would have made it more interesting and replaced them with the utterly bland story they made up.

Given the movie was a major box office hit and critically acclaimed, I suspect the producers knew what they were doing.

Just don't expect historical accuracy from a Hollywood movie. Cleopatra didn't look like Elizabeth Taylor either.


Cleopatra most likely looked like a horribly inbred Greek, seeing as her family tree has a literal circle in it.

I'm struggling to think of any movies that are really historically accurate - the point is to tell a good story to get people to watch it to make a profit.

Edit: I'm Scottish so Braveheart is the obvious example - entertaining movie but wildly inaccurate and even manages to get who/what the term "braveheart" refers to wrong.


Master and Commander (the Russell Crowe film)? While it's a fictional story, I've heard it said that it captures the period extremely well.

Gettysburg and its prequel (Gods and Generals). The dialogue and character motivations may or may not be accurate, but the battles it depicts are pretty accurate.

When I visited Bletchley Park a few years ago, I got into a conversation with one of the docents about the film and it was clear that they had a very low opinion of it there. Turing deserved a better film.

Relative to the zero times I have ever heard his name mentioned vs. the countless times I have read about him on manic midnight Wikipedia binges, I think Zuse seems very unsung.

Agreed. Zuse did a lot of cool stuff.

"I think Zuse seems very unsung"

From what I know, this might be because he worked for the Nazis, his prototypes were mostly destroyed in the war and somewhat forgotten, and he seems not to have been influential on the main branch of computing.


There is an indirect influence of Zuse in the daily life of most programmers.

The "for" keyword used in the "for" loops comes from Algol 60, which inherited it from its predecessor from 1958, IAL.

The "for" loops were introduced in IAL by Heinz Rutishauser, who had used "fuer" loops (i.e. "for" in German) for the first time in 1951, when he had published a paper describing a computer programming language, 3 years before Fortran introduced the "do" loops in 1954.

In his 1951 paper, Rutishauser cites Zuse and his paper about a programming language (published in 1948) as the inspiration for his own improved programming language, which included the "for" loops.


Oh sure. I know why, but I find computer engineering a bit more interesting than theory so I'd rather read about him than quite a few others.

Von Braun also worked for the Nazis.

Interesting to speculate what would have happened if Zuse had been paper-clipped to the US and given a huge budget.


One funny anecdote about Zuse during the war was that he managed to save his Z4 because it was named "V4" in the paperwork. The Wehrmacht officers thought it was one of the retaliation weapons like the V1, V2, and V3, so the V4 was very important and got high priority to be hidden away somewhere.

Not completely surprising (from Wikipedia):

“While Zuse never became a member of the Nazi Party, he is not known to have expressed any doubts or qualms about working for the Nazi war effort.”


The problem with technically correct allocation of credit is that to be truly technically correct, it gets very messy very quickly, as all knowledge is built on other knowledge. The credit for founding computer science would be "[absolutely massive list of people] and finally, of course, the one we call Ung, who discovered the wheel".

That might seem pedantic and it is, but you need to define exactly where the line is drawn and, more so, give a good reason why. In fact it's not even that simple: WE need to decide and agree on where the line is drawn, and all of us have to agree why. Otherwise one man's pedantry is another's important credit.

Obviously that's not going to happen anytime soon, so for now, figureheads like Einstein and Turing do the job. And they do certainly deserve credit to some degree. That or we stop giving credit completely, which a). seems like a good way to destroy knowledge and b). isn't going to happen anytime soon.

Edit: As another commenter pointed out, if Einstein or the like were born somewhere else and lived around a different group of people, there's a chance he wouldn't have become a figurehead, or he would have made fewer, more, or different discoveries. Therefore, there's a third option for assigning credit, in which everyone who has ever lived up until those discoveries gets equal credit. If I were 60 or so years older, I'd have as much claim to credit for the Turing machine as Turing himself. So would you. Of course, this is pretty much as good as no credit to anyone at all, but fixing that again requires a joint agreement on where the line is drawn.


Those aren’t the only options. We can give credit without creating mythic heroes. Giving a technically correct, precise allocation of credit is messy, you’re right. But so is defining what ‘tall’ means, so the precision is beside the point. You don’t need to define exactly where the line is drawn.

It reminds me of voting systems, but maybe that’s just because of the election yesterday. If you want to give singular nontransferable credit, the things you say are important, because giving someone credit takes it away from someone else. Division and fighting become the right answers. But if you spread the credit around, saying Leibniz and Newton both get calculus credit (and probably not just those two!), then discussions of which one should get the title of The One And Only Calculus Hero just seem absurd.


Defining tall doesn't matter so much, a) because we have a precise measure of height and b) because we all know and fully understand that the definition of tall is completely subjective. You might say the same thing about credit, but then you also need to accept that a lot of people will gravitate towards these mythic heroes as the appropriate recipients of credit.

The problem with my wheel example is that it demonstrates the absurdity of trying to assign credit to anyone involved, but it doesn't quite demonstrate how difficult it is to even draw a rough line. Does the inventor of the abacus get credit in the creation of computer science? The discovery of electricity? And what about all the (comparatively) minor inventions and discoveries off the back of those?

As far as I can see it, it either needs to be completely subjective, or there needs to be a line. Maybe it doesn't need to be incredibly specific, although at that point some subjectivity creeps back in


Agreed. The mainstream needs a single or at least smaller list of heroes that represent a larger effort. Turing is especially suited for this given the oppression he suffered for his sexual orientation combined with the impact of lives saved in WWII as a result of his work.

I need to find a reference for this:

Goedel apparently believed that it was impossible to explicitly enumerate all the partial computable functions over the integers, until Turing proved him wrong. He reasoned as follows:

- You can't enumerate the total functions because Cantor's diagonal argument is constructive. Given a claimed computable enumeration of N^N, you can perform Cantor's diagonal construction to produce a computable element of N^N not in the enumeration. So N^N is computably uncountable.

- Classical set theory says that a superset of an uncountable set is also uncountable.

The problem is that while a superset of an uncountable set is uncountable, a superset of a computably uncountable set may instead be computably countable. The partial functions over the integers show that this is indeed the case.

Cantor's construction is blocked because a partial function can output a degenerate value called non-termination. A computable function must map the non-termination value to itself. Since it isn't mapped to something different, Cantor's construction can't produce something outside an enumeration.
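
A rough Haskell sketch of the blocked diagonal (the names here are mine and `en` is only an assumed enumeration, so there is nothing to actually run):

  data Nat = Zero | Succ Nat

  -- Suppose someone hands us a computable enumeration of functions Nat -> Nat.
  en :: Nat -> (Nat -> Nat)
  en = undefined  -- assumed for the sake of argument

  -- Cantor's diagonal: differ from the n-th function at input n.
  diag :: Nat -> Nat
  diag n = Succ (en n n)

If every `en n` is total, `diag` is a total computable function that differs from `en n` at input n, so the enumeration missed it. But if `en` enumerates partial functions, `en n n` may never terminate, and then `diag n` never terminates either: `Succ` cannot turn a non-terminating computation into a terminating one. So `diag` is itself just another partial function and may already appear in the enumeration, and the contradiction evaporates.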


I think the more widespread term for what you call "computably countable" is "computably enumerable", perhaps in part to make the distinction more clear. For example, the set of all natural numbers is (trivially) c.e., but has many non-c.e. subsets. But to understand this is understanding the Entscheidungsproblem, so it shouldn't be surprising that this was less clear before that was resolved.

Computably countable is also a correct term. In line with this approach to computability theory: http://math.andrej.com/asset/data/synthetic-slides.pdf

It shows that the unsolvability of the halting problem follows from:

- The computable uncountability of N^N, which can be proved using an identical argument to the one Cantor used to prove the set-theoretic uncountability of N^N.

- The computable countability of the partial maps from N to N.

If the halting problem were solvable, then the second bullet point would contradict the first. So it's essentially a weird twist on set theory that uses constructive logic.
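
To make the contradiction concrete, here is a rough Haskell sketch of what a halting decider would give you (`enumeratePartial` and `halts` are hypothetical placeholders, not real functions):

  data Nat = Zero | Succ Nat

  -- Bullet 2: a computable enumeration of all partial functions Nat -> Nat,
  -- e.g. by enumerating program texts. Left abstract here.
  enumeratePartial :: Nat -> (Nat -> Nat)
  enumeratePartial = undefined

  -- Assumed for contradiction: "does program n halt on input k?"
  halts :: Nat -> Nat -> Bool
  halts = undefined

  -- With the decider, every partial function can be patched into a total one.
  totalized :: Nat -> (Nat -> Nat)
  totalized n k = if halts n k then enumeratePartial n k else Zero

Every total computable function already appears somewhere in `enumeratePartial`, so it also appears unchanged in `totalized`, which would then be a computable enumeration covering every total computable function. Bullet 1 (the effective diagonal argument) says no such enumeration can exist, because the diagonal over it would be a total computable function it misses. So `halts` cannot exist.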


I don't doubt that the term is in use, and I understood what you meant. But it's not listed among the seven (!) synonyms for computably enumerable on Wikipedia, and more to the point, the slides you linked to also don't contain that term.

However, that's not the point I wanted to make. I wouldn't like calling it computably countable even if everyone else did, simply because it gives the wrong intuition about subsets.


> the slides you linked to also don't contain that term

The term "countable" is used in the slides, where it means computably enumerable. The adjective "computably" is used when there's a need to distinguish set-theoretic notions from similarly behaved computability-theoretic notions. Otherwise the meaning of the term "countable" can be context-dependent.

> it gives the wrong intuition about subsets

In constructive logic, a countable set can contain an uncountable subset. The misleading intuition (in the context of non-classical logics) is based on classical logic where countability is a statement about the size of a set. Whether you think constructive logic is a good way of explaining computability theory is another question, but it's certainly a viable way of doing it.

It's like how the term "line" can mean something different in hyperbolic geometry from what it means in Euclidean geometry. You could argue that it might mislead people about the nature of parallel lines, but that's why hyperbolic geometry is not Euclidean geometry. Another example is using "multiplication" to refer to an operation on matrices, which might make people think that AB=BA when that usually isn't true. Mathematics is all about re-using terms and pointing out that there are differences.

[edit]

Admittedly, the slides do use the term "enumerable" as well, so that's another option. When there's a possibility for confusion with set theory, you can say "computably enumerable" as you suggested.

[edit] Made lots of edits. Hopefully, that's it.


> The problem is that while a superset of an uncountable set is uncountable, a superset of a computably uncountable set may instead be computably countable. The partial functions over the integers show that this is indeed the case.

> The computable countability of the partial maps from N to N.

Can anybody give an example?

does this have anything to do with rationals? or is it more related to limits and calculus?


> does this have anything to do with rationals? or is it more related to limits and calculus?

No.

I'm going to use Haskell, which I'm going to assume you know. I'm using it because it seems closer to the math. The type for naturals is:

data Nat = Zero | Succ Nat

and then all Haskell functions of type `Nat -> Nat` represent partial functions. They're not total functions because they might enter an infinite loop for some inputs. You can clearly enumerate all Haskell functions which are syntactically correct and which have type Nat->Nat, so the partial functions of that type are computably enumerable.

But consider the total functions of type Nat->Nat (i.e. those that never enter an infinite loop). Assume you can have a function `en :: Nat -> (Nat -> Nat)` which can output every total function of type Nat->Nat.

Then the function `counterexample n = Succ (en n n)` is a function that cannot be outputted by `en`, and therefore `en` fails to enumerate all total functions.

I've got other things to do, unfortunately, so I can't say more than this.

[edit] Fixed the counterexample.


> You can clearly enumerate all Haskell functions which are syntactically correct and which have type Nat->Nat

Is this a finite process?


Eh, he worked on important problems and made significant contributions. It's a rare case that scientists are actually oversold, and that's mostly a case of the public mistaking excellent popularizers of science for top scientists. It might seem that some get too much attention, but that is because scientists in general aren't known at all and are undeservedly obscure. Pulling down an icon wouldn't help them get any more public recognition; it would just leave the field without any publicly known names, like most areas of science (like, what are the big heroes of solar physics?).

Also see this: https://news.ycombinator.com/item?id=27536974

It's almost like Schmidhuber hates Turing and wants the world to worship Godel.

It's personal for him.

He also wrote a tweet and maybe another post where he ranked all the countries of Europe and said that Europeans collectively won the most Olympic medals and hence are superior.

He suffers from this weird and serious case of European Supremacy.

I know about his actual works (not what he claims to have done).

And based on that I respect him.

He also has got a cult of people following him and who will gang up on any dissenters, i.e. people not thinking of him as a walking god.

Schmidhuber has actual contributions, I know.

Now he has moved to a country with an extremely bad human rights record, with policies and enforcement that put the Taliban to shame.

Now he is promoting that institution left and right at any opportunity he gets.

He is very hard to take seriously.


Honest question here: how much of a role did CS theory play in the history of the practical computers we use today? For example, if we play back the history of the modern computers that we use from the perspective of the CPU (with its instruction set), RAM, disk, high-level languages, and operating systems... do they have a foundation in CS theory? Or do these constructs come from a more practical engineering basis, devoid of theory?

Many theories were developed after the practical implementations. It is really hard to separate the two. Relational databases, for example, are a case where the theory came first. But would they have been created without the experience of using hierarchical databases?

You can say something similar about a lot of inventions or discoveries that happened in a time when many others were working in the same area. Light bulbs, powered flight, calculus, the EMF laws etc.

History seems to like a single origin story.


Not just history and not just inventions (ok, this quote is pretty much history by now too):

"There's a tendency among the press to attribute the creation of a game to a single person," says Warren Spector, creator of Thief and Deus Ex.

I guess whoever wrote that line didn't even get the irony.

https://www.ign.com/articles/2001/11/12/deus-ex-2


If Hollywood made a movie about it, then it most likely is oversold.

> especially in the Anglosphere.

What does he expect? We've always done this, whether it is with Thomas Edison, the Wright brothers, etc. Each nation/ethnic group oversells its achievements and minimizes others'. We also did that in terms of gender/race: women and minorities have been undersold.

The Germans do the same thing. Gutenberg didn't invent movable type. It existed in China, Korea, etc. for centuries before Gutenberg.

It's human nature. All nations/groups do this. Imagine how Native Americans feel when we tell them Columbus discovered the Americas.

Everything the author wrote appears valid though. Turing didn't invent the computer. He didn't create computer science. He didn't win WW2. What he did was solve Hilbert's decision problem (which Church did before Turing) in a very interesting manner. With or without Turing, we'd still have computers, we'd still have computer science, and we'd still have won WW2.

But sadly, the masses need heroes, and heroes are created/manufactured by their advocates. Turing, for a variety of reasons, has more advocates than Church, Godel, etc. And unfortunately for the writer, it's an Anglo world, and our message will be the loudest and most visible.

The author states the truth, but he is a lone voice in the wilderness. A few thousand will probably read his factual article while millions will watch the misleading "The Imitation Game" and read the misleading articles from the media. It must be frustrating for the author, but truth is at a disadvantage to fiction here.

