
They aren't lower-bounded by thermo bits --- because one can specify algorithms which require no thermodynamic work to implement. Logical bits and thermo bits are related by contingent facts of implementation. One has first to specify an algorithm (defined in terms of a computational model); then it's an open question as to what-and-how it'll be implemented.

It's also not at all clear that the physics terms "entropy, information, bits, etc." have anything to do with their computational "equivalents". Only by fairly strained thought experiments do we get alleged connections. Even these thought experiments only provide extremely limited translation of these terms between domains.

"Information" in a "logical" sense is a radically different think than in a "thermodynamic sense"... for example, the former has an obvious observer-independent definition, the latter does not.

The whole game of trying to bridge these notions without specifying implementation relations (etc.) is largely the new form of that transhumanism-craze: the respectable ideological space of delusional techno-utopian hopes.




That's pretty much useless, though. We know for a fact that the brain uses irreversible computational processes and that essentially all the input information is erased. So the lower bound is indeed valid: there is a necessary connection between thermodynamic bits and logical bits. And since we know the computation is irreversible, we know it isn't dominated by those zero-work algorithms.

And indeed, if you look even minimally at the neuronal model of computation, you can see that the computation is going to be irreversible (though reversible computation is possible in theory).

Now, you're right that there is a lot of wiggle-room for implementation, but the lower-bound is indeed robust. So there is clearly value to the argument.


Any correspondence between the formal properties of the algorithm (namely, the logical model of the system) and its physical properties *requires* (1) the algorithm; and (2) the implementation model.

Speaking about "computational processes" and "reversibility" at all, absent these, is meaningless.

What, exactly, of the brain is the "computational process"? What exactly is "irreversible"? This is really just pseudo-science, though it may not seem it.

We have no idea whatsoever what a logical model of anything to do with animal intelligence is, and hence, absolutely no idea what properties of animals (local to the brain or otherwise) are relevant to them implementing this logical model. To say any process of the brain is "computational" is either to say something useless (namely in the sense in which every process is "presumably, somehow computational, given a logical model") -- or, to say something pseudoscientific.

I would agree that animals, in modifying their environments by conceptualising them and developing skillful techniques to regulate themselves in response to them (ie., largely: intelligence), are highly thermodynamically irreversible systems.

This isn't a useful observation, given in pseudo-CS terminology, absent a correspondence between these physical facts and the presumed logical model of the computation going on.

If the whole of reality is an algorithm, it's one (via energy conservation) which requires zero energy to run. Ie., "logical bit" and "thermal bit" are radically different notions. They are connected contingently when one has an algorithm to-hand, and knows how it will be implemented.

There's nothing to be said about the logical bits of animal intelligence, ie., nothing to be said computationally, because we have no idea what they are.


No, there is no pseudo-science there, except when one takes the statements to mean more than they actually mean.

>Any correspondence between the formal properties of the algorithm (namely, the logical model of the system) and its physical properties requires (1) the algorithm; and (2) the implementation model.

>Speaking about "computational processes" and "reversibility" at all, absent these, is meaningless.

This is simply not true. We start from the assumption (as does all theoretical CS) that computational models are equivalent in capability.

We observe that the brain has inputs and outputs. We observe that the outputs are at least partially determined by the inputs.

From this, we can conclude rigorously that the brain does computation, and there is thus a computational process going on in the brain.

>What, exactly, of the brain is the "computational process"?

The correlation between inputs and outputs that follows a process in which information is transformed. This is readily observable.

>What exactly is "irreversible"?

It is impossible to reconstruct the input from the output, therefore the computation is said to be irreversible, and is thus subject to various thermodynamic limits.

Therefore, we can rigorously conclude that the brain performs irreversible computation.
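
To make "irreversible" concrete, here is a minimal sketch (mine, a toy gate-level example rather than anything about neurons): an AND gate erases information because several inputs collapse onto the same output, whereas NOT erases nothing because it is one-to-one.

    # Toy illustration: count the preimages of an output. Many preimages
    # means the input cannot be reconstructed, i.e. information was erased.
    from itertools import product

    def preimages(gate, output, arity):
        return [bits for bits in product([0, 1], repeat=arity)
                if gate(*bits) == output]

    AND = lambda a, b: a & b
    NOT = lambda a: 1 - a

    print(preimages(AND, 0, 2))  # [(0, 0), (0, 1), (1, 0)] -- irreversible
    print(preimages(NOT, 0, 1))  # [(1,)] -- reversible, input recoverable

Any system whose overall input-to-output map is many-to-one in this sense is, by that definition, erasing bits.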

>We have no idea whatsoever what a logical model of anything to do with animal intelligence is, and hence, absolutely no idea what properties of animals (local to the brain or otherwise) are relevant to them implementing this logical model. To say any process of the brain is "computational" is either to say something useless (namely in the sense in which every process is "presumably, somehow computational, given a logical model") -- or, to say something pseudoscientific.

Now you're just taking what I said far above and beyond its actual meaning, and taking that interpretation to be pseudo-scientific. I said nothing about intelligence, all I'm saying is that there are computational processes going on inside the brain, and that those are irreversible. We don't need to know what algorithm is going on, nor do we need to know the precise model of computation, to be able to draw conclusions.

One of the conclusions we can draw is that the brain executes irreversible computation, and that the general algorithms implementing those computations must not be zero-work.

That is done without needing to know the details you seem to argue are necessary to draw such a conclusion.

We can also conclude more from this. We can, for example, place lower and upper bounds on the information being processed by various elements.

Now, someone could take this methodology and abuse it, or, as the article does, use it in conjunction with supplementary assumptions and go beyond absolute rigour.

>If the whole of reality is an algorithm, it's one (via energy conservation) which requires zero energy to run. Ie., "logical bit" and "thermal bit" are radically different notions. They are connected contingently when one has an algorithm to-hand, and knows how it will be implemented.

Now you're going way beyond what we can rigorously ascertain. If you consider the whole of reality to be a computer, then what are the inputs, and what are the outputs? Perhaps you consider the process of time to be an algorithm with the past as an input, in which case it is an algorithm that does require energy to run because it's performing irreversible computation, unless there is hidden state somewhere.


"computation" is only equivalent when it's calculative, ie., when the algorithm in question is merely computing some number.

The reason the LCD screen displays some output isn't because the electrical switches have some numerical state, it's because they have some electrical state.

The sense in which "computer" describes any system is trivial; for there to be any empirical content to computational language, we need an empirical model of the relevant algorithms the computer is performing.

My kettle is also a computer: water is its state, boiling is the "computational process", and its change of state is the "number being computed".

But it is only a kettle because the "number" that this "calculation" computes is a magnitude implemented by the kinetic state of the water.

The sense in which "computers are equivalent" is *empirically empty*. There is no scientific content to this; it is merely a statement of pure mathematics. To use this language, of pure discrete mathematics, as-if it is informative about empirical systems *is pseudo-science*.

One may as well say the brain is a geometrical system which is extended in Euclidean space, and we know topologically, that all such systems are geometrically equivalent.

The world science studies (unlike that of pure mathematics) is extended in space and time, and has properties (eg., charge, mass, etc.). The number "34029348309384398" is only a frame of a video game when it names (charge, mass, extension, duration...) in a highly particular manner.

Unless you have an algorithm in mind, and a model which says how its numerical content corresponds to physical properties, you aren't saying anything empirical at all.


>"computation" is only equivalent when it's calculative, ie., when the algorithm in question is merely computing some number.

Whoever said that computation has to be with numbers? Here we are seeing the brain as calculative, because the output of the brain is some function of its input, is it not? Surely we can agree that this is an important, crucial, and interesting function of the brain, that is worthwhile to study? I'm not saying that this is necessarily all that the brain does, but it's an interesting and unresolved dimension of what the brain does, perhaps even the most interesting.

>The reason the LCD screen displays some output isn't because the electrical switches have some numerical state, it's because they have some electrical state.

Sure, I don't see how that's an issue. Why does computation have to be on numerical states? It can be on any kind of state at all, even continuous states. Be it water pressure, base pairs in DNA, luminosity, anything at all that can represent data. In fact, some of the earliest algorithms were operating on lines and circles, which are neither numerical nor even discrete.

>My kettle is also a computer: water is its state, boiling is the "computational process", and its change of state is the "number being computed".

Sure, you could see it that way. But computation isn't the only thing your kettle is doing - and it's not the interesting part about it either.

> The sense in which "computers are equivalent" is empirically empty. There is no scientific content to this; it is merely a statement of pure mathematics. To use this language, of pure discrete mathematics, as-if it is informative about empirical systems is pseudo-science.

It's neither purely mathematical nor pseudo-scientific. Sure, you could define almost anything to do some computation, but that doesn't mean the computation it is doing is worthwhile, or an interesting dimension of its operation. Certainly, however, the computational dimension of the human brain - that is, how it manipulates data - is the most interesting part of it. It's clearly informative - in this case we can conclude that the brain does irreversible computation, and thus establish various bounds on how it operates.

> One may as well say the brain is a geometrical system which is extended in Euclidean space, and we know topologically, that all such systems are geometrically equivalent.

Sure, we can say that. How is this helpful in this context? Understanding the brain as doing computation is certainly helpful, and perhaps understanding it as topologically equivalent to other objects is too, but I can't really see how.

>Unless you have an algorithm in mind, and a model which says how its numerical content corresponds to physical properties, you aren't saying anything empirical at all.

Again, why does an algorithm even require a numerical content? All an algorithm needs is data, and we clearly have data going in and out, which constrains the physical system that is processing that data.


A computation is just an implementation of a function `f: {0,1}^N-> {0,1}^M`... if not, what is the meaning of the word at all?

All these phrases "manipulates data", "computation", and so on... they don't mean anything. A kettle "manipulates data".

You think you're saying something empirically significant about the brain using this language, but this language has no empirical content. It is a language of pure mathematics.

All of these claims are true of any system. What do we learn when we hear that "the brain is a computer" (or whatever else you wish to say)? We don't learn anything.

If you can provide a logical model of the algorithm the brain is performing, and a model of *how the brain implements it*, then we learn something.

Saying "the brain is a computer" is basically no different than saying "it can be described, somehow, by mathematics".


> A computation is just an implementation of a function `f: {0,1}^N-> {0,1}^M`... if not, what is the meaning of the word at all?

Sure, that's a definition. It happens to be mathematically equivalent to the calculation of any observable signal in the real world, because there exist isomorphisms between {0,1}^N and the set of functions of maximum frequency f over time t, for example.
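
As a rough sketch of that correspondence (my own numbers; strictly speaking, finite-precision quantization makes it an encoding rather than a true isomorphism): a signal band-limited to frequency f and observed for time t is pinned down by about 2ft samples, and at b bits per sample the whole signal is a point in {0,1}^(2ftb).

    # Back-of-the-envelope with assumed, illustrative numbers.
    f = 1000.0   # hypothetical bandwidth, Hz
    t = 0.5      # hypothetical observation window, s
    b = 16       # hypothetical bits of precision per sample

    n_samples = int(2 * f * t)   # Nyquist rate: 1000 samples
    n_bits = n_samples * b       # 16000 bits: a point in {0,1}^16000
    print(n_samples, n_bits)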

> All these phrases "manipulates data", "computation", and so on... they don't mean anything

They are certainly meaningful.

> A kettle "manipulates data".

Yes, it does, but in a very trivial and uninteresting way. Crucially, it does more than just manipulate data, but manipulating data is one of the things a kettle can do.

> You think you're saying something empirically significant about the brain using this language, but this language has no empirical content. It is a language of pure mathematics.

We certainly can say empirically significant things about the brain, since facts about the way in which it performs computations can be inferred, and from this we can infer information about how the brain must be structured. Trivially, for example, we can infer that the brain must have at least a certain minimum total outgoing neuronal bandwidth by analyzing the makeup of the outputs it can compute, and indeed we can do more.

> All of these claims are true of any system.

Of course, they are. But for some systems they can provide more insight than for others. The computation a kettle can do is insignificant and largely irrelevant to what interests us kettle-wise, so we don't really care. However, the way in which the brain manipulates data is much more interesting and intricate (and we can of course place lower and upper bounds on how intricate it is), and correspondingly we can learn much more.

> If you can provide a logical model of the algorithm the brain is performing, and a model of how the brain implements it, then we learn something.

I don't need to provide a model of the algorithm the brain is performing nor of how it is implemented to learn things. Thanks to various interesting results in computer science (in conjunction with results in the hard sciences), we can learn things about the brain without needing at all to know which algorithm it is implementing, and without knowing much of how it is implemented. At the extreme of knowing little about the implementation, we have physical lower bounds of physically possible implementations of any given computation, regardless of the algorithm.

CS theory allows us to infer facts about algorithms by knowing the computation.

For example, if I have a black box that inputs a list and outputs a sorted list, since I know that it is impossible to sort a list by comparisons in the general case faster than n log n, and since it is impossible to know the original list from the sorted list, knowing the Landauer limit, I can infer for example a minimum energy cost per sort (and, given a sorting rate, a minimum power consumption). I don't need to know anything about the implementation or algorithm.
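
To put a rough number on that (my own back-of-the-envelope, assuming room temperature and that the only information erased is the original ordering):

    # Landauer lower bound for a black-box sorter, illustrative only.
    # Sorting n distinct items erases the permutation, about log2(n!) bits,
    # and each erased bit costs at least k_B * T * ln(2) joules.
    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # assumed room temperature, K

    def landauer_min_energy(n):
        bits_erased = math.lgamma(n + 1) / math.log(2)   # log2(n!)
        return bits_erased * k_B * T * math.log(2)       # joules per sort

    print(landauer_min_energy(10**6))   # ~5e-14 J to sort a million distinct items

Dividing that energy by how often the box sorts gives the minimum power consumption.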


Thank you both for writing in considerable detail to seek clarification.

I can tell some aspects are still lost in translation, though. It isn't easy.


> one can specify algorithms which require no thermodynamic work to implement

Can you give an example? To my limited understanding, performing work without expending energy sounds like perpetual motion.


Sure, this is the trap of thinking of computer science as either being about computers (machines) or about science. Since it is mostly a kind of pure discrete mathematics, we need to be careful.

Consider an algorithm which says:

    while (true) state *= -1;   // the state alternates: +1, -1, +1, ...
Now, identify the +1 state as the earth when in one half of an orbit, and the -1 state as the earth in the other half. The position of the earth is then the logical bit ("the state"), and its movement is the change to the logical bit.

This is "perpetual motion", but the technical name in physics for this is inertial motion, and its common. Motion itself doesn't require work, using that motion for work, requires work.

See also https://en.wikipedia.org/wiki/Reversible_computing
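
A minimal sketch of the same point (mine, mirroring the toy loop above): the toggle is a bijection on {+1, -1}, so every step can be undone exactly, no information is ever erased, and Landauer's argument therefore puts no minimum energy cost on it.

    # The toggle is its own inverse, so nothing is forgotten at any step.
    def step(state):
        return -state      # forward: +1 -> -1 -> +1 -> ...

    def unstep(state):
        return -state      # inverse: recovers the previous state exactly

    s = +1
    assert unstep(step(s)) == s   # fully reversible, unlike an AND/erase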


I was at a workshop once where the topic was entropy and somehow we got into a discussion regarding Maxwell's Demon... This strongly reminds me of that discussion.

The problem for the Demon is that it needs to change the state of the trap according to the state of the incoming particle. And if you just hand-wave "such a decision-making thing exists", then you have your contradiction. But we tried, for several days, to come up with _any_ implementation (including fantasy materials) that could conceivably exist _and_ produce that effect. And for each and every attempt to build one, we came up _immediately_ against diffusive parts, where energy _must_ be lost. We concluded that, while none of us would feel confident enough to _rule out_ the possible existence of Maxwell's Demon, we wouldn't _at all_ be surprised if it could be ruled out.

So while you are entirely correct that, with enough hand-waviness, you can build reversible computations, I have yet to see an argument where a _potential_ implementation of one is argued through to the end.


It's sufficient for my purposes just to show that "bit" in a logical model and "bit" under some idealised thermodynamic thought experiment are radically different notions.

Reality, I am sure, has many systems which are settable and measurable and changeable at some minimum energy... and which can interface with devices of interest. For any given problem, the limit-case energy requirement is defined by the needs of the algorithm. If we require setting a highly complex input state, and if we require interactions with certain devices, then we've immediately ruled out a great deal.

These systems would provide you with a certain kind of "limit-case correspondence" between "logical bits" and "ideal physical bits" --- but we don't know what this system is. You don't get it from just playing around with units, nor from these kinds of thought experiments. You need to know what algorithm you're talking about, and what its requirements are.

If the algorithm is understood just to be "the whole of reality", and if we suppose that it is fundamentally just aggregates of discrete states being flipped (to me, highly unlikely)... then the energy requirements are Everything... which sum, I imagine (via energy conservation), to zero.


An enjoyable and approachable text with more detail on reversible computing and energy expenditure from the perspective of physics is the Feynman Lectures on Computation[1].

[1] https://www.goodreads.com/book/show/17274.Feynman_Lectures_O...


If you really want to be pedantic, even infinite inertial motion isn't possible, because a true vacuum is impossible, and there is thus necessarily drag somewhere.

Also, I've said that before, but we already know that brains operate under an irreversible computation model.


> Also, I've said that before, but we already know that brains operate under an irreversible computation model.

We don't know that brains operate under any kind of computational model at all. It's often postulated, but it's not proved. Every attempt I've seen at a proof reduces to begging the question.

Edit: To be clear I’m not saying a computational model of the brain can’t be a useful tool. Newtonian physics works quite well quite often even though reality isn’t Newtonian.


To be clear, I'm not saying that everything the brain does can be modelled by a known computation model. All I'm saying is that the interesting part of what the brain does is computation, in that it takes in data, operates on it, and returns data. It does this in an irreversible manner because you cannot determine the input from the output (nor a significant part of it).

If there is any model of how the brain works it will be a computational model. Perhaps a new one, and perhaps a radically different one, but it's still going to fit the definition of a computational model.


Ultimately, the open question here is this: are uncomputable functions just a mathematical fancy, or do there really exist processes that can only be fully correctly described by uncomputable functions?

Personally I lean towards the latter view. I’m the first to admit that I have no proof. It’s just my belief, because I find the metaphysical evidence compelling. I don’t object to investigating the former possibility, but I also don’t care for it just being baldly asserted.

If the answer is affirmative, that means that science will probably never be solved and we’ll just have to content ourselves with incremental improvements in our understanding. That’s observably been the case up until now. Granted even if all processes are in fact described entirely by computable functions we might still never discover what they are.

I hope it’s clear how all that relates to the concrete problem of understanding human cognition and the brain.

I'm nowhere near smart enough to even begin to conceive of a mathematical framework for taming noncomputable functions in a pragmatic way, but I earnestly hope some genius comes along who is, supposing that noncomputable functions are needed to completely describe our reality.


Well, those functions are called uncomputable because, as far as we know, there is no repeatable way of computing them, right?

So if the human brain was able to compute functions that Turing machines can't, it doesn't mean those functions are uncomputable, it just means the Church-Turing hypothesis is false.


The Church-Turing hypothesis just means that the lambda calculus and Turing machines are isomorphic. It doesn’t actually tell us anything about reality, no matter what enthusiastic misunderstandings might say.


No, it doesn't. The Church Turing hypothesis is much stronger - it states that all real-world computation can be done by lambda calculus or Turing machines.

That's why it's a hypothesis and not a theorem - it's pretty easy to prove that lambda calculus and Turing machines are equivalent if you have the mindset of a programmer.

Put in other words, the Church Turing hypothesis is that there is no computing model higher in hierarchy than the TM and lambda-calculus.

> It doesn’t actually tell us anything about reality

Well, of course it doesn't tell us anything about reality. That's because no one managed to prove it, and I suspect no one ever will.


Ah yes. Fair enough. I confused the thesis with the hypothesis. This comes back to the begging the question I initially objected to.

Also, the notion of highest computing model is itself begging the question in the sense that the framing assumes a computational model.

So much for undergraduate philosophy. What’s your opinion? Are uncomputable functions just fanciful irrelevancies or are there processes in reality that can’t be described by computable functions? If so or if not can you propose how we might know?


> I confused the thesis with the hypothesis.

I'm actually relatively confident the two are the same thing. I don't think any special name was given to the equivalency between lambda calculus and TMs.

> This comes back to the begging the question I initially objected to.

I don't think it does! Of course if someone assumes the thesis is true then they're begging the question, but I don't think that's what I am doing here. Indeed:

> Also, the notion of highest computing model is itself begging the question in the sense that the framing assumes a computational model.

The definition of computation I'm using is that there exists a process somehow that can go from the input to the output. If that process can't be replicated by a TM then so be it, it just means Church-Turing is false.

Actually, the original wording that Church used was "any function that can be computed by a mathematician.." (could be computed by a TM).

So I think that it's reasonable to frame it as computation - it would be begging the question if I assumed Church-Turing was true.

It doesn't help that the uncomputable function definition assumes it! But in reality that a function is uncomputable doesn't actually mean there is no framework of computation that can compute it, just that a TM can't.

> So much for undergraduate philosophy. What’s your opinion? Are uncomputable functions just fanciful irrelevancies or are there processes in reality that can’t be described by computable functions? If so or if not can you propose how we might know?

I've honestly spent so much time on the question I don't even know anymore. I'm leaning towards the side that uncomputable functions can actually be computed in reality. There's an interesting paper here: https://www.sciencedirect.com/science/article/pii/S221137971... which also links to an equally interesting 2002 paper that provides theories as to how you could compute uncomputable functions in the real world (with or without black hole evaporation), but this is still speculative because our best theories in physics are still iffy at those scales, and obviously the physics is beyond my understanding here.

Anyways, I hope this answers the question of how we might know!


> The definition of computation I'm using is that there exists a process somehow that can go from the input to the output. If that process can't be replicated by a TM then so be it, it just means Church-Turing is false.

> Actually, the original wording that Church used was "any function that can be computed by a mathematician.." (could be computed by a TM).

That explains the disconnect. My working definition of computation is essentially the one Church gives there, with the understanding that he was talking about specifically computing the exact value of some function that could also be computed with a slide rule or some other effective procedure.

> So I think that it's reasonable to frame it as computation - it would be begging the question if I assumed church Turing was true.

Even assuming C-T, it's an enthymeme and not a syllogism. The unstated leg is something roughly like "the universe is a computer." Assuming that is basically assuming what is to be proved with respect to whether or not the brain is a computer or some higher category of reckoner that happens to be able to do everything a TM can, at least with enough time, pencil, and paper.

> I'm leaning towards the side that uncomputable functions can actually be computed in reality

For clarity it would probably be sensible to use a term other than computation for determining an exact value in this case. Anyhow, I believe I follow and I lean that way too.

There's a related metaphysical question too. I'm not sure how to state it exactly, but it's implied by questions like "When I throw a ball is reality computing a parabola and applying necessary modifications?" The alternative is that whatever way reality determines how that ball moves, it's not by what we'd call an effective procedure.

> Anyways, I hope this answers the question of how we might know!

Thanks for your insight and the link!


> Even assuming C-T, it's an enthymeme and not a syllogism. The unstated leg is something roughly like "the universe is a computer." Assuming that is basically assuming what is to be proved with respect to whether or not the brain is a computer or some higher category of reckoner that happens to be able to do everything a TM can, at least with enough time, pencil, and paper.

I don't think I'm making that assumption. I'm making an assumption that the process by which the brain determines outputs from inputs has some degree of repeatability - which I think is more than reasonable. Whether the brain is or isn't a computer in the classical sense of the word, there is clearly computation happening that relates inputs to outputs, right?

That is to say, if I took the same person and made them live exactly the same life (that is, everything outside how they react reacts the same way), I'd expect the outcomes, as we repeat it more and more, to approach some kind of distribution after a fixed amount of time t (one that diverges exponentially as t gets larger). Note that this doesn't assume the presence or absence of free will; by the law of large numbers, it works either way.

From then on there is some kind of effective, repeatable procedure, so the brain does do some kind of computation. And of course that's not the only thing the brain does. But we don't need to have the universe as a computer, just the brain.

> There's a related metaphysical question too. I'm not sure how to state it exactly, but it's implied by questions like "When I throw a ball is reality computing a parabola and applying necessary modifications?" The alternative is that whatever way reality determines how that ball moves, it's not by what we'd call an effective procedure.

That's an interesting question. My attempt at an answer is that, if we were to understand the universe as a continuous process which can be defined by some kind of (recursive) semi-random, differential rules - but with maximum frequency - then yes, you could say that reality is an ongoing computation. Or you could take another crack at it and think that reality is just a set of probabilities and joint probabilities that merely get (partially) sampled - in which case there isn't really any computation happening; we're just observing a tiny sliver of reality as the ball travels the probability space. There are certainly even other ways to see it. I think it's a matter of perspective as to how you see it.

Thanks for the conversation! It is great and I've been able to develop my understanding of these concepts :)


Are you trying to argue that reversible computing is classical computing, that the second law of thermodynamics doesn't exist, that modelling the brain as a classical computer/thermal process (ie. something that creates information) is presumptive, or something else? All of these concepts have been explored in depth for decades, and you don't appear to be acknowledging the existing body of work.


None of those things. My point is that logical models of computation don't, in themselves, have empirical content. That the language and ideas of theoretical computer science are, as with any area of pure mathematics, only empirically useful insofar as we establish how a system corresponds to a formal model.


This is patently untrue though. Information theory exists. Entropy exists. Irreversible computation is fundamentally tied to entropy increase. It doesn't matter what system you are using: if you create/delete a bit, you create heat. No known systems are limited by this, but the limit is looming.


All of those words have different meanings in CS and in thermo and in QM, and in many other areas. They aren't the same concepts.

They are only connected when empirical-logical models are provided.


Can you provide a non-theoretical example where an extant inertial body actually does no work? I believe this is impossible. You’ve just moved the assumptions to where they frame your perspective better than that other thing which competes with your perspective.


Energy is always conserved. Just define the computer to be the system in which energy is conserved, and there you go.

A "computer" is a formal pure-mathematics notion, it is just a certain sort of "discrete mathematical model". One can define a computational model of any physical system, and hence, find computers in which energy is conserved.


If we could formalize "observe the result" we would, and it would require exchanging energy with the computer, breaking its conservation. The only way to avoid it is to say that we were always a part of the computation, but things seem to get very odd from there.

You mention elsewhere that the whole universe might be construed as a computation in which energy is conserved. Why not take the absurd reduction for what it is though, and conclude that it means there's something wrong with calling anything-at-all a computer?


It's not an absurd reduction; it follows from the definition of a universal Turing machine. This is how completely non-empirical theoretical models of computation are: they are devices of pure mathematics.

This is exactly my point.

If you want to use the formal machinery of computer science to say something empirical, you can't just hijack terminology and speak in this pseudoscientific way, ie., "the brain as a computational process" (etc.).

What is the empirical content of such a claim?

There might be some if you defined what algorithm the brain was computing, and what the empirical correspondence between the algorithm and the brain was (etc. etc.) -- but no one is doing this.

We are speaking as if "universal Turing machines" somehow had some empirical content, as if there's something insightful about labelling the brain this way (vs. anything at all). There isn't.


Sure, it's merely an idea that is believed, like the Greeks believed that the night stars were heroes, or Galileo that Jupiter had satellites, or Newton that F=ma everywhere and that God had set up the solar system.

Some of these theories work, others don't. Nobody had a truly solid reason to believe them before it was clear they worked (in the case of those that did). Scientists all believe things they have no strict reason to -- when they're wrong they're wrong, and when they're right it's a scientific discovery.

Planck had no reason to postulate the energy packets when he came up with them. He described it as a move of pure desperation. The explanation (such as it is) for why this worked wasn't developed until decades later.

I don't personally believe Turing machines reveal much about the human brain, but you're verging on a more general claim about how science should be performed. That to hypothesize, work within a model, or even get something out of it you need to already have a precise description of the phenomenon being described and "why" the model might work. None of that is required though.

To be clear this all applies much less to "bread and butter" science. What we're faced with here is a process that has no convincing description.

To convince people to give up on computational models of the brain you need to convince them that it doesn't work. Nothing has been revealed by it in 60+ years. It's never predicted anything. Neurobiology and pharmacology at least have some results to show about a real brain. What you're basically engaging in instead is philosophy -- "pure" math is distinct from the empirical world, what if the whole universe, science requires this specific method, ... But it's better if we can dismiss a scientific idea on scientific grounds, rather than philosophical considerations.



