The Brain Is Not a Computer (2016) (aeon.co)
57 points by cjauvin 12 days ago | 70 comments





Oh dear. This isn’t my field at all, and maybe I’m just doing a bad job of understanding their point, but this sounds completely bogus.

Really...the brain doesn’t create representations of visual stimuli or store memories? Under what possible definitions of those words can this statement be sensical?

Surely the author believes that visual stimuli cause measurable changes in brain state, and that people can indeed remember past visual stimuli. Then how is it true that brains don’t create representations of visual stimuli and store and retrieve them? I’m at a loss here.

Perhaps the author means that the brain doesn’t do these things in the same way as digital electronic computers we’re familiar with. That’s certainly the case at the most basic level.


I'm not sure I understand the piece either, but I think it's trying to say that human memories are associative, sequential, and distributed rather than localised.

So there isn't "a representation" in the discrete sense. It's more like the entire system changes, and it's impossible to physically scan specific elements of it to retrieve selected content.

You can trigger selective recall, but you're triggering a complex and noisy process which generates an experience that may include remembered elements - not pulling out a predictable bit pattern.

There isn't an exact equivalent in CS. Traditional binary memory is obviously nothing like human memory. Neural nets have some superficial similarities, but they lack generality.

I'm not completely convinced by the argument, but I'm glad someone is making it.

The problem with it is that we can remember specific discrete facts quite easily. If you ask me how many flats the key of F major has, I can tell you without being distracted by other memories.

What we don't know is how that fact is represented, how exactly my brain changed after I learned it, how similar those changes would be to changes in other brains learning the same fact, whether everyone has similar subjective experiences on recall, or how to scan someone's brain to check whether or not the fact is known.


I think researchers are working on this very thing, and it seems like the brain does store "visual representations" or at least it can't be ruled out at this point.

https://www.theregister.co.uk/2013/08/20/mindreading_mri_spo...

And of course, why wouldn't the brain store a visual representation of what you remembered? That would be the easiest way to store and retrieve it, which is why we do that on computers as well.


> Why wouldn't the brain...

Because it isn't how brains work. Recollection is re-experience.

The article you link says researchers taught a model how to match patterns to letters, given the presumption that they are letters, for a single subject's MRIs taken while they were experiencing the sight of words and letters. Not at all the same as saying a brain stores data.


At some point the data must be stored physically in the material that makes up our brain. How that is stored could very well be a 1 to 1 mapping, and shouldn't be ruled out.

And what if it is instead stored as "I felt x, which led to y, which made me think z"? It is the conditioning of the pattern into neuronal potentials, meaning that we can experience the same thing again. This is not the storage of data; it is the conditioning to react the same way at the start of a similar cascade.

We don't record the memory, we make it easier to feel and think the memory again. Like muscles adapting to exercise, our brains adapt to experience. Keep it up long enough and we get good at it.
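To make that "adapt rather than record" idea concrete, here is a toy Python sketch (nothing in it is meant as a model of real neurons): a Hebbian-style update strengthens whatever connections were active together, so a repeated experience becomes easier to re-evoke from a partial cue, without a discrete record being written anywhere.

    import numpy as np

    n = 20
    weights = np.zeros((n, n))            # connection strengths, all start flat
    experience = np.tile([1, 0], n // 2)  # a recurring pattern of co-active units

    def hebbian_step(w, x, lr=0.1):
        """Strengthen connections between units that are active together."""
        return w + lr * np.outer(x, x)

    def settle(w, cue, steps=5):
        """Let activity settle from a partial cue: no lookup, just dynamics."""
        x = cue.astype(float)
        for _ in range(steps):
            x = (w @ x > 0).astype(float)
        return x.astype(int)

    # "Keep it up long enough and we get good at it": repetition tunes the weights.
    for _ in range(50):
        weights = hebbian_step(weights, experience)

    cue = experience.copy()
    cue[: n // 2] = 0                     # a degraded reminder of the experience
    print("re-evoked pattern matches original:",
          np.array_equal(settle(weights, cue), experience))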


If that's the case, how can anybody recall 1000 digits of pi or whatever insane number they're up to these days? People don't recall it based on their feelings. It's memory storage. How it's done is still being worked out.

Because you don't recall the digits, you recall yourself learning those digits. While doing so, you experience things, you have feelings, although weak ones.

It is very hard to introspect into what cannot be introspected... but with such things as how many flats the key of F major has (a question I only vaguely understand), I have a hunch that such data recall is heavily crosschecked and error-corrected by correlating it with many, many other known facts. You may associate it with maths lessons in elementary school, visual charts, piano lessons, math knowledge, that special smell of the paper, and number theory (not necessarily fancy, just simple stuff), which altogether makes it nearly impossible to corrupt, on recall, the answer to how many flats the key of F major has.

Whereas for the kinds of memories we easily do corrupt upon reading (was the perpetrator tall or blonde?), there may be no such data to draw from. Or worse, generalised "data" which is good for pattern matching but not for actual recall. (The stuff prejudice is made of.)

If so, then there is no contradiction or problem, just a very intricate mesh of data.

(Also I have an idea which is not even a hunch, but pulled out of thin air, that our "logic" and "memory" are much more intermixed in our wetware than in computers.)


Surely retrieving an individual fact is more like a conditioned response than it is storing some bytes on a harddrive?

When I say "what is 9 times 9" your brain activates all the pathways (probably trained in childhood) that lead to you thinking "81" in a similar way to Pavlov's dog.


So, the brain has a distributed and complex architecture. Well, did anybody not know that already? Evolution has no love for engineering beauty.

If recall is possible, then there is a representation there. It's obvious that there is a representation of Beethoven's 5th Symphony in the brain of somebody who can play it. It's just convoluted, distributed, and encoded in some completely crazy signal space. Yet, if it weren't there, the person wouldn't be able to play it.


The encoding assumption in neuroscience is also nicely argued against in this preprint. https://www.biorxiv.org/content/biorxiv/early/2018/07/13/168...

Closest equivalent in CS would be lossy storage, Monte Carlo algorithms, or of course, neural networks.

Yes “computer” here means “digital electronic computer” and yes the storage, retrieval, and processing all happens very differently in the brain than digital computers are capable of.

You may think these are dumb points for the author to make, but it’s not clear to me at all that the media or the VCs who buy the hype about machine learning, AI, and self-driving cars realize just how different they are.


> Yes “computer” here means “digital electronic computer” and yes the storage, retrieval, and processing all happens very differently in the brain than digital computers are capable of.

No, it's incorrect to use "digital computer" here. More correct might be a von Neumann architecture computer. But then that shows that the author is attacking a strawman: people comparing brains to computers aren't limiting themselves to such an architecture.


A von Neumann architecture does not limit one to a single method of information storage or retrieval, it just limits the efficiency of different methods.

Are you sure? This mostly comes up in pop sci explanations where von Neumann architecture may as well be synonymous with "digital computer" because it's all the vast majority of readers are familiar with.

A "digital computer" is simply one in which information is represented in binary. The encoding of information does not alter the process of computation, so that's an irrelevant detail to compare to.

We have plenty of Harvard architecture computers (eg. DSPs), and there are plenty of other computational architectures (DNA computing, optical computing, quantum computing).


Sure, but my point is they aren't what people think about when you say "digital computer" or use that as an analogy. People think about what they are familiar with in their everyday lives.

I don't see how that's relevant. The article is attacking the claim "the brain is a computer". That means all computers.

The fact that the article's example were of a specific type of computer architecture, and had nothing specific to do with digital computers per se despite the article claiming it does, just proves my point that the argument is flawed.


Because it's about people's mental models. When you use an analogy ("like a digital computer") to examine another phenomenon, then of course people will interpret it in terms they are familiar with.

Otherwise what's the point in using an analogy?


The point of an analogy is to make a phenomenon understandable by showing a formal correspondence to a phenomenon that is already understood. I've just spent 3 posts explaining that this analogy fails on multiple levels, so the argument by analogy is invalid.


Totally with you here. Not a neuroscientist. But I am a computer scientist. Lots of strong opinions, not a lot of strong facts. The analogies are interesting but they break down quick.

"We don't create representations of visual stimuli"..."We don't retrieve information or images or word from memory registers" Neither do computers in many cases. It's as if the author is saying because in the brain isn't a tape recorder or film camera then it doesn't work like a computer. Nope. Studies show that much like a computer we encode information as we store it. Because we encode it fancy ( or weird :-) doesn't mean its not encoded or retrieved. The Dollar Bill example is a red herring. Just ppitballing here: What if, instead of creating an image for the dollar, the subject's mind created a visual'thumbnail' and 'hash' of the real dollar. The thumbnail she can recall easily and lets her draw 'enough' of the bill to be vaguely recognizable. The hash, lets her recognize the real deal whenever she sees it. She simply compares the hash of this new object with list of hashes, and if its a dollar the mind finds a match. Of course, it far more sophisticated, but these simple methods, cleanly account for what the author observed.


>Surely the author believes that visual stimuli cause measurable changes in brain state

They repeat throughout that it does

> and that people can indeed remember past visual stimuli.

They agree also

>Then how is it true that brains don’t create representations of visual stimuli and store and retrieve them?

>Perhaps the author means that the brain doesn’t do these things in the same way as digital electronic computers we’re familiar

The author is definitely in agreement with the second statement from what I understand. However, I think where people are getting lost is that they expect the author to resolve definitions for "cognitive"/information processing terms (because in our day and age, they are identical) when actually the author is rhetorically refusing them validity, and on purpose. Hence why in those places where the author is expected to supply an equivalently robust counter-definition they instead pose a far more general definition, such as, with respect to learning, "the brain changes". The name for this rhetorical strategy is "refusing the blackmail of the Enlightenment".


The dollar bill thing seems silly. The fact you can draw anything without looking at a dollar bill means something is being stored, right? That means the brain stores information. There's no way out of that. And the fact that you can draw the dollar bill on cue means something is being retrieved. No way around that either. It doesn't matter how the information is represented. The brain-as-a-computer analogy doesn't specify that "neurons are bits" or whatever.

I don't expect the brain to work like any computer we've ever built (which seems to be the point of view this writer is attacking), but I do expect that it has the capacity to store, retrieve, and process information and so the computer analogy seems useful.


Yeah, you could make the same complaints about jpeg compression.

Jpeg compression was my thought exactly. This author knows extremely little about computers.

It is trivially true that the brain is not a digital electronic computer. You cannot, however, use that simple fact to show that the brain is not some sort of information-processing device, and as for the notion that brains do not store information, I wonder what he thinks memories are.

The author concludes by asking "Given this reality, why do so many scientists talk about our mental life as if we were computers?" He offers no support for the proposition that this view is common, and I suspect he is often taking, as literal, speech that was intended to be metaphorical.


The author seems to have far too narrow an idea of what a computer is.

This seems like the appropriate response to the article. In fact, the article is even factually wrong on the matter of what we're born with, and it's even wrong on what it considers "information".

I suspect this misunderstanding of "information" is the core of the confusion. He needs to revisit physics and learn some computer science, because information and physics are inextricably intertwined, so the brain very much operates on information using rules.

Edit: and further, the brain is a finite state automaton due to the Bekenstein Bound, a physics theorem.


Yes, this is precisely what I thought as I read it.

I don't doubt his expertise on the brain side, but he characterises the computer in a very limited way, almost perfectly suited to make his argument.

With the reconstruction of the dollar, there are plenty of examples of how computers need not store entire instances of a thing to be able to recognise it later, such as applications of hash functions or spam filtering.
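One concrete example of that, purely on the computing side: a minimal Bloom filter in Python. It recognises items it has 'seen' without keeping any of them verbatim (and can rarely give a false positive, which is part of the point).

    import hashlib

    class BloomFilter:
        """Recognises things it has 'seen' without storing any of them verbatim."""
        def __init__(self, size=1024, hashes=3):
            self.size = size
            self.hashes = hashes
            self.bits = [False] * size

        def _positions(self, item):
            for i in range(self.hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def __contains__(self, item):
            # Can rarely claim to have seen something it hasn't (a false memory
            # of sorts), but never forgets something it has marked.
            return all(self.bits[pos] for pos in self._positions(item))

    seen = BloomFilter()
    seen.add("this exact dollar bill")
    print("this exact dollar bill" in seen)  # True
    print("some other banknote" in seen)     # almost certainly False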


In a former life I was a cognitive neuroscience researcher.

This reads like a piece written by someone who heard a neuroscientist take issue with the "brain as computer" metaphor, but didn't quite grasp what it was all about.

The "brain is not a computer" meme has to do with the fact that the brain does not process information in the same way as a digital computer. It is not saying that the brain is not a symbol-processing/computational system.


I think us commenters are all on the same page here :)

The author is almost making it seem like models are reality and that people think that. They're not and I don't think anyone has ever thought they were...

Further, and as other comments already mentioned, the brain is thought of and treated as a Turing machine, not a digital computer. It's done this way because the brain can be mapped to the definition of a Turing machine.

And I have to defend Von Neumann. In his book, he explored Turing equivalencies between the brain and the computer concepts of the time used to implement the digital Turing machine; he didn't actually think that the brain was a one-to-one mapping to a digital computer... He knew the difference between models and reality.

Even for the history of models the author mentions (hydraulics, automata, etc.), these all contain some Turing equivalencies if implemented correctly, and they were simply using the language and examples of the time to express this.

The author also continues to mangle any and all ideas of modeling, abstraction, and equivalence throughout the whole article. With regard to his 'uniqueness problem', 'information loss' is modeled digitally for a reason; just because humans are lossy doesn't mean we can't model them that way. Think of a compressed image file.

I don't think there's a single researcher worth their salt that thinks the 'IP Metaphor' is gospel. That is just a grossly unscientific idea to assume.

We're all free to choose any model or collection of models we wish to approximate reality, but some of them work better than others and the brain is a complicated thing to model.

The author is trying to dramatize a triviality.


>The author is almost making it seem like models are reality and that people think that. They're not and I don't think anyone has ever thought they were...

The author is arguing that when there is a "monopoly" of models with respect to a given domain, like the brain, that inexorably tends to make the conceptual distinction between model and reality irrelevant. The author then goes on to cite examples of this, not only with respect to our current age's infatuation with the IP model, but _previous_ ages' own repetition of this with respect to their own guiding technological framework (hydraulic engineering and the humors, etc.).


I think you're very wrong if you think all professional neuroscientists are disinterested witnesses or that they all give adequate philosophical distance to the IP metaphor. Its assumptions constrain or at least impact the landscape of valuation of all academic neuroscience research.

Academic science isn't an apolitical, free space. We are not all free to choose any model, and what it means for a model to "work better" comes down to evaluative criteria that are almost always baked into a particular set of theoretical assumptions.


>The author is almost making it seem like models are reality and that people think that. They're not and I don't think anyone has ever thought they were...

that's where you're wrong. Way too many people, many of them engineers, consider models to represent reality, and that's a real, big problem, because this is deeply linked to how they consider science.


How can the brain be mapped to the definition of a Turing machine? It doesn’t have an equivalent to an infinite tape and it doesn’t work according to anything like the table of rules for a Turing machine. Can you point me to a reference for this claim?

Most of the comments I’ve read don’t like the article but almost all of the commenters I’ve read don’t seem to have studied this issue. It gives me the impression that these are visceral reactions. The article is not an article for experts. It’s expository in nature.

One thing that stood out for me was this quote:

As pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.

If true this seems to me (very much a non-expert) to give serious doubt to the notion that the brain is a computer.


The brain is a biological machine; it follows the laws of physics. There isn't any part of known physics that prevents us from simulating any number of atoms using a suitably powerful computer.

A small aside, even simulating a small collection of quantum particles fully is enormously taxing with current computers and adding more particles increases complexity beyond just a linear increase. But this is a mathematical exercise.

Now, it's possible that the human brain depends on some law of physics that is not computable (i.e., not possible to simulate on a computer), but given the level of study that has gone into neurons, along with the temperature of the brain vs the energy ranges we've examined with colliders, it seems super unlikely.

If it helps, Turing machines with n-dimensional tapes have been proven equivalent to the basic Turing machine.


The 'infinite tape' issue is a red herring that sometimes appears in discussions about these issues. Real computers (such as the one I am typing this on) are informally described as 'Turing equivalent' because they can implement a Universal Turing Machine up to the limitation imposed by their finite memory. An alternative way of looking at it is that their model of computation, augmented with unbounded memory, could implement or simulate a Universal Turing Machine.

This is not the equivocation that it may appear to be, as it establishes a sort of asymptotic boundary between what is possible and what is not (the more memory we have, the closer we can get to it.) It also means, for example, that we don't have to wonder if there is one computer instruction set or architecture that can perform computations that are impossible by another (again, up to having sufficient memory to complete it.)
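For readers who want that informal sense pinned down, here is a toy Turing machine in Python (a sketch only; the dict tape grows on demand, so the machine is 'unbounded' exactly up to the memory of whatever runs it, which is the asymptotic boundary described above):

    def run_turing_machine(rules, tape, state="start", head=0, max_steps=10_000):
        """Minimal Turing machine: the dict tape grows as needed, so in practice
        the machine is bounded only by the memory of the computer running it."""
        for _ in range(max_steps):
            if state == "halt":
                return tape
            symbol = tape.get(head, 0)
            new_symbol, move, state = rules[(state, symbol)]
            tape[head] = new_symbol
            head += move
        raise RuntimeError("ran out of steps (or, on a real machine, of memory)")

    # Example rules: walk right over a block of 1s and append one more 1.
    rules = {
        ("start", 1): (1, +1, "start"),
        ("start", 0): (1, +1, "halt"),
    }
    tape = {0: 1, 1: 1, 2: 1}
    print(sorted(run_turing_machine(rules, tape).items()))  # block of 1s is now one longer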

The author of the claim you are questioning has not, so far, returned to explain what he means, but I think he is saying that the brain is Turing-equivalent in the informal sense given above: we can compute like a Turing machine, up to the available tape/memory (though with a very limited tape, if we are not writing things down...)

If that is so then I (one of the people here criticizing the article) must say that I don't think it is relevant. An alternative interpretation of the statement, that it says it has been shown that there is a Turing machine equivalent to the human brain, would seem to depend on believing (as I happen to) that the brain's functioning is a matter of electro-biochemistry that could, in principle, be simulated by a computer, but no-one, so far, has given a demonstration, or even a convincing explanation, of how that works at a Turing-machine level of abstraction.

With regard to the quote you offer: I think it is a simple case of rhetorical overreach -- one might need to know the entire history of that brain to fully understand everything there is to know about its current state, but that does not mean that, absent that full history, the state is meaningless. In understanding what a person is thinking, what they remember (which is an aspect of their brain's state) is more important than what actually happened.


I’m a mathematician and tend to take things literally. I should not have mentioned the infinite tape part. What I should have said is that, according to the article, we don’t store memories in the way that a Turing machine does. There is no tape as such and there is no set of rules that the brain abides by in terms of how to do the next step, so to speak.

I gathered that the quote I referenced means that the state of a brain at time t is not sufficient to reconstruct memories or other meaningful information. The fundamental point of contention between you and others criticizing the article appears to be that you all believe that there is a storage mechanism in the brain in a similar (analogous?) fashion as a computer. I gather the author claims this is not so. Information is not stored in neurons in such a way that one “retrieves” it by accessing a storage location.

I don’t know enough about this stuff to intelligently comment on the veracity of it. I just know that someone far more knowledgeable than me and just about everyone else commenting says that our intuition about how this stuff works is wrong. That alone is worth causing me to reconsider my intuition on this stuff.


Speaking for myself (I don't necessarily agree with everything that has been said in opposition to this article), I think you are missing my point about memory.

The author is saying our brains do not function like our digital computers, something I think we do all agree on. It is not so clear how the author thinks our brains do work, but he apparently wants us to stop using computer metaphors when discussing their function.

He would have this prohibition extend to the notion that our brains store and retrieve information, which is absurd; one might as well argue that a computer is not a Turing-equivalent device because RAM is not a tape. The author says that scientists will never find copies of words or grammatical rules in the brain, and if, by copies, he means coded in something like UTF-8, then that is, of course, true, but beside the point: if his brain did not have some mechanism that supports the storage and retrieval of this information in some manner, how was he able to write the article in the first place? He claims you won't find copies of Beethoven's 5th. symphony in a brain, but I suspect that at least Beethoven himself, and many conductors of the piece, have had just that - and the soloists who play his piano concertos are not reading from a score, so where does that come from?

I think the author may have ended up making these absurd claims because he is trying to use the trivial brain-does-not-function-like-a-computer argument to prove something that is just an unargued-for intuition: he doesn't seem to think RAM (or perhaps any form of physical information store) could possibly be the foundation for something that works like human memory. He is apparently unaware of the extent that software such as neural networks (or even relational databases) have already extended the concept of information storage and retrieval beyond the simple model of randomly-addressable bytes (which does not, of course, make the point that human memory is like a computer's; what it does show is that the author's low-level comparisons are insufficient to make the larger point he is trying to squeeze out of them.)


My take on the article and in particular the quote that I referenced in my original post is that the author does not think memory is stored in the way that you and I think it is. The way I think of the brain working with regard to memory is analogous to how computers store information. I’m unable to model it in any other way. But then there’s the quote in the article that even if I had a snapshot of the brain at time t I would not be able to reconstruct something meaningful without knowing the history of that brain’s owner.

I don’t know enough to understand how that is possible or why someone knowledgeable about this stuff thinks this. I have basically the same conception of the brain and how it works as you do. But I’m confronted with the fact that a person far more knowledgeable than me thinks otherwise. It is that fact that causes me to persist in my view with caution. The author may be a crank. I don’t know.


This seems like a really fine distinction, but it's more of an unbounded tape rather than an infinite tape. The tape can only reach infinite length after infinite time.

True, but for a finite-tape machine, the possible outcomes include 'ran out of tape'. Also, for any finite-tape machine, there is a finite-tape machine that computes whether the first halts, runs out of tape, or runs forever.

> If true this seems to me (very much a non-expert) to give serious doubt to the notion that the brain is a computer.

Yeah. It also puts Quantum Mechanics into serious doubt...

So, I'll wait for better evidence than some random person on the internet thinking it feels right.


If I’m the random person you are referring to then you should not rely on evidence from me. My conclusion was based on what was written by the author of the article. I quoted from the article. You may want to rely on what the author says or on another expert. I did state that my opinion was a non expert one. The quote I made said that even if one had a snapshot of the current state of a person’s brain that would be meaningless without the full history of that brain’s owner. That’s a pretty damning fact if one wants to think of a brain as a computer.

I see from another comment that you disagree with the author. I’m guessing you aren’t an expert in this area either. Do you have more than visceral reasons to doubt the author’s conclusion? The author’s credentials seem to imply that he has thought about these issues far more than you or I. He presumably knows much more about how the brain works than you do. Why do you think your few minutes of thought on this article are enough to discount what he says and his conclusions? Isn’t that a bit arrogant?


Well, your comments are both much more reasonable and deserve much more confidence than the article. But no, not your comments, not the article, and not the quoted neurologist, even assuming the article's author understood him correctly (which would be an exception), are enough.

That kind of claim requires a clear and reproducible effect, and confirmation from many people that they looked and found it. It's the kind of claim that, even if I did the experiment and got the result myself, I still wouldn't trust.


I understand your position better now. Thank you.

The quote I made certainly goes against my intuition. But I’ve not studied the issue and don’t know enough about it to trust my intuition. Apparently though there are people who have studied this at the level of an expert and they believe it. At the very least this indicates that possibly my intuition is wrong and that I shouldn’t be so ready to discount the article.


The author's notion of computer does not serve him well. It is too grounded in his experience of digital computing devices rather than an understanding of computing as a kind of process. Furthermore, the field of computational neuroscience is doing quite well, thank you. Temporal difference learning is both an algorithm and instantiated in brains in some form.
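For readers who haven't met it, here is a minimal TD(0) sketch in Python (a toy chain of states, nothing neural about it): each state's value estimate is nudged toward the reward plus the discounted value of the next state, the kind of prediction-error signal that dopamine neurons are often described as carrying.

    # A tiny chain of states; stepping right eventually reaches a reward.
    states = ["A", "B", "C", "goal"]
    values = {s: 0.0 for s in states}
    alpha, gamma = 0.1, 0.9

    def step(state):
        """Move one state to the right; only reaching 'goal' pays off."""
        nxt = states[states.index(state) + 1]
        return nxt, (1.0 if nxt == "goal" else 0.0)

    for _ in range(200):  # many episodes of experience
        state = "A"
        while state != "goal":
            nxt, reward = step(state)
            # TD(0): nudge the estimate toward reward + discounted next value.
            values[state] += alpha * (reward + gamma * values[nxt] - values[state])
            state = nxt

    # Values rise the closer a state is to the reward.
    print({s: round(v, 2) for s, v in values.items()})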

Isn't this directly contradicted by the grid cells? https://en.m.wikipedia.org/wiki/Grid_cell

It sounds like because the author doesn't understand how neurons create a representation of reality, they're splitting hairs and saying it doesn't.


The author mentions Beethoven. Consider what a virtuoso pianist must go through to create a performance of a 40-minute-long sonata. Yes, the performance includes her personal interpretation of what's written on a piece of paper. It is almost certainly influenced by performances others have created.

But when it comes to the individual notes, their sequence had damn well better be literally correct for the entire performance. If not, someone in the audience will certainly notice that one flubbed note.

So in learning the work, "she was changed in some way" all right. As some members of the audience had been ... identically. And that 'some way' certainly resembles pulling bytes out of 'storage'.


I've been meaning to write an essay for a long time now about how our personal abstractions of things and how they are defined/work can make nuanced discussions about certain ideas difficult; I believe this post is a victim of that.

“The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.”

I actually like this as an idea: that our tools for understanding brain functions are still too primitive, and that the traditional computer-based models are lacking.


I’m struggling with the definition of “retrieve” used here. According to the meanings of “retrieve” that I’m familiar with, I can’t conceive of any sense in which songs or finger movements aren’t retrieved by the brain.

"I can’t conceive of any sense in which songs or finger movements aren’t retrieved by the brain."

And that's the issue, isn't it? I think he's using "retrieve" to mean something much closer to what a digital computer does. Ie, there's a single "place" in the brain where the song is compressed and stored. My definition of "retrieve" (and yours, I think) is implicitly more relaxed; I say that a rough distributed system that reacts to stimuli like songs by being able to approximately reproduce them later counts as "retrieval."
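A small Python sketch of what that relaxed sense of "retrieval" can look like (a Hopfield-style associative memory; the patterns are random stand-ins for "songs", and nothing here is offered as a brain model): several patterns are smeared across one weight matrix, there is no single place where any one of them lives, yet a noisy cue settles back toward the right one.

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_patterns = 64, 3
    patterns = rng.choice([-1, 1], size=(n_patterns, n))  # stand-ins for learned "songs"

    # Storage is smeared across every weight; no single weight "holds" any one pattern.
    weights = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(weights, 0)

    def recall(cue, steps=10):
        """Repeatedly settle toward a stored pattern from a rough, noisy cue."""
        x = cue.astype(float)
        for _ in range(steps):
            x = np.sign(weights @ x)
            x[x == 0] = 1
        return x

    noisy = patterns[0].astype(float)
    flip = rng.choice(n, size=10, replace=False)
    noisy[flip] *= -1                                     # a degraded version of pattern 0

    print("bits matching after recall:", int((recall(noisy) == patterns[0]).sum()), "of", n)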

As other commenters have noted, the article uses a very restrictive definition of computation/retrieval. I mean, earlier in the article, he gives a definition of the same game that the "Lifelong Learning" and "Reinforcement Learning" people use.

I think he's actually on the same page as many learning theorists, and is just trying to make it clear to a general audience that a very "tight" match between the brain and Von Neumann machines isn't reasonable.


Perhaps that’s what he meant, but does anyone even claim that that is the case?

Models are not equivalent to the phenomena they describe.

Computational models are not an exception to this.

There is not even a single "part" or "function" of the brain that we fully, exhaustively understand through a computational explanation. All claims of certainty are premature.

What's really fascinating and really needs the attention of historians and anthropologists is why in this current historical moment so many STEM educated people who are otherwise very bright end up confused about this. Maybe the answer is obvious though.


The author seems to be hung up on a distinction between two representations and trying to argue that they cannot be the same thing, despite the fact that we have abundant evidence that both are readily interconvertible. Now, I would agree that a neural net that converges to a grammar might not be a grammar, but at that point we would seem to be missing the forest for the trees.

Imagine a database that stores strings using a common-prefix method; one could make the same claim: this database does not store or retrieve strings. And yet it does.
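A toy version of that in Python, for concreteness (a trie, i.e. a prefix tree; the class names are just illustrative): no string sits anywhere verbatim, yet adding and looking up behave exactly like storage and retrieval.

    class TrieNode:
        def __init__(self):
            self.children = {}
            self.is_end = False

    class PrefixStore:
        """Stores strings via shared prefixes rather than verbatim copies."""
        def __init__(self):
            self.root = TrieNode()

        def add(self, word):
            node = self.root
            for ch in word:
                node = node.children.setdefault(ch, TrieNode())
            node.is_end = True

        def __contains__(self, word):
            node = self.root
            for ch in word:
                if ch not in node.children:
                    return False
                node = node.children[ch]
            return node.is_end

    store = PrefixStore()
    for w in ["memory", "memorise", "metaphor"]:
        store.add(w)                # shared prefixes are not duplicated
    print("memorise" in store)      # True
    print("metabolism" in store)    # False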

The model of what something does is implemented by an underlying mechanism. But for many reasons the mechanism doesn't have to be, and often isn't, a naive translation of the model.


> Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.

What does he mean by "neural structure" here and how is it different from "memory" and "representations" which supposedly we don't have?


Funny that "computer" was originally a metaphor applied to the machine from the human occupation.

Might be an interesting experiment to train a neural network to distinguish between different currencies, and then visualize the features that correspond to the “one dollar neuron”. It might turn out not that far from the drawings of the author’s students.
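A toy version of that experiment, with synthetic "currencies" and a single logistic unit standing in for the "one dollar neuron" (Python with NumPy; a real run of the idea would use a deep network and activation maximisation, so treat this only as an illustration of what "visualising the feature" means):

    import numpy as np

    rng = np.random.default_rng(0)
    side = 8

    # Synthetic stand-ins: each "currency" is a noisy version of its own template.
    templates = {"dollar": rng.normal(size=(side, side)),
                 "euro": rng.normal(size=(side, side))}

    def sample(name, n=200):
        return templates[name][None] + 0.5 * rng.normal(size=(n, side, side))

    X = np.concatenate([sample("dollar"), sample("euro")]).reshape(-1, side * side)
    y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = dollar

    # A single logistic "dollar neuron" trained by gradient descent.
    w, b = np.zeros(side * side), 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= 0.5 * (X.T @ (p - y) / len(y))
        b -= 0.5 * (p - y).mean()

    # For a linear unit, the weight image *is* what it looks for: roughly the
    # dollar template minus the euro template - a caricature, not a photograph.
    feature_image = w.reshape(side, side)
    print("correlation with (dollar - euro) template:",
          round(np.corrcoef(feature_image.ravel(),
                            (templates["dollar"] - templates["euro"]).ravel())[0, 1], 2))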

A Dunning-Kruger effect is going on with the author of the article. He's too ignorant of the subject he's writing about to understand the difference between the low-level way computers work and the high-level sophistication algorithms can exhibit.

Quite ironically, I think his line of thought shows precisely why the brain is probably quite like a computer. The algorithm going on in his brain was probably like this:

1. Assuming I'm like a computer leads to negative emotions (because of the lack of free will and reduction in self-esteem it implies).
2. Therefore give high weights to facts contradicting this, and low weights to facts supporting this.
3. For a range of subjects regarding the behavior of the brain, do:
3.1. If the subject feels like it's logically supporting my view on the subject, add it to the article.
3.1.1. Anything I know the brain does and have no clue how a computer could do will automatically feel like it supports my conclusion. Since I'm pretty clueless as to how computers work in general, most things are actually going to seem like something a computer can't do.
3.2. Otherwise, ignore this and keep going on to the next example.


This article would have been more relevant several decades ago when AI research started. But now I think it's mainly a strawman argument because the models are more realistic and different from what he is talking about.

The author smugly assumes we all lack imagination in how the human mind might work, when in reality, it is he who lacks imagination in how computers or algorithms might someday work.

Neither is a computer a brain.

But the brain and the computer are both pattern-recognizing feedback loops; one just isn't as developed yet.

The computer doesn't see the image but neither do I. We simulate it.



The article has so many errors that it is hard to write a reply. Let's pick one:

> The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’)

The author should have read https://en.wikipedia.org/wiki/Analog_computer


Meta question: since this article is a repost and the comments from 2016 and 2019 say the article is largely incorrect, by what process did it make it to the front page? tl;dr: this is a reposted and panned article; how is it here?


